Wednesday, July 10, 2019

AWS vs. Azure vs. Google: What’s Best for You?




AWS pros and cons

As mentioned before, the reasons for picking one vendor over another will differ for each customer. But there are aspects of the competing clouds that offer benefits in certain circumstances. The breadth and depth of its offering is widely seen as a plus for AWS.

AWS had a head start on the competition, building out its suite of cloud services since 2006. All of these are built to be enterprise-friendly so that they will appeal to CIOs as well as its core audience of developers. The vendor ranks highly on platform configuration options, monitoring and policy features, security and reliability. Its partner ecosystem and general product strategy are also seen as market leading, and its AWS Marketplace has a large number of third-party software services.

Another of the benefits of the AWS cloud is its openness and flexibility. For example, Transport for London - which also relies on Azure in other parts of its operations - has used AWS to meet spikes in demand for its online services such as its Journey Planner tool. However, one area where AWS falls short to some degree is its hybrid cloud strategy. Unlike Microsoft, AWS has tended to be dismissive of the benefits of on-premises private clouds. Many organisations - such as those in the financial sector - prefer to keep sensitive data within their own data centres, using public clouds for other purposes. At the same time, this clearly has not deterred many customers from using AWS as part of their cloud strategy, regardless of whether they plan to move all systems to the cloud or not.

Another downside to AWS is the scale of its offering. While this is an attraction in many senses, it can be difficult at times to navigate the large numbers of features that are on offer, and some see AWS as being a complex vendor to manage.

Microsoft Azure pros and cons

The big pull for Azure is where Microsoft already has a strong footing within an organisation and can easily play a role in helping those companies transition to the cloud. Azure naturally links well with key Microsoft on-premise systems such as Windows Server, System Center and Active Directory. In addition, while both AWS and Azure have PaaS capabilities, this is a particular strength of Microsoft's.

One of the downsides, however, has been a series of outages over the years. Gartner analyst Lydia Leong has recommended considering disaster recovery capabilities away from Azure for critical applications hosted in the cloud. AWS isn't immune to downtime, though, suffering a major S3 outage of its own in March 2017. As part of its 2017 global IaaS Magic Quadrant, Gartner stated that its clients have had issues with "technical support, documentation, training and breadth of the ISV partner ecosystem" - but the company has been steadily working on these areas.

Whereas AWS provides users with many options for supporting other platforms, Azure can be somewhat restrictive in comparison. If you want to run anything other than Windows Server, Azure might not be the best solution - though Microsoft has been willing to embrace open source platforms, if a little slowly. For example, the company was busy extending its support for Linux operating systems in 2017.


Google Cloud Platform pros and cons

Google has a good track record with innovative cloud-native companies and a good standing in the open source community, but has traditionally struggled to break into the enterprise market. Its go-to-market strategy has been focused on proving itself on smaller, innovative projects at large organisations, rather than becoming a strategic cloud partner. Increasing the breadth of its partnerships and supporting pre-cloud businesses and IT processes will need to become focus areas if it wants to attract more traditional enterprises.

The company is certainly betting big on its machine learning tools, with the company's internal AI expertise and popular TensorFlow framework as selling points in what is set to become a key battleground.

It has also proved itself more than an AWS copycat, launching innovative features in the machine learning space as well as its BigQuery analytics engine and the Cloud Spanner distributed database. It is also worth noting that Google has the smallest global footprint of the big three.

The best public cloud vendor for you is going to depend on your needs and your workloads. In fact, the best vendor for some of your projects might not be the best for others. Many experts believe that the majority of enterprises will pursue a multi-cloud strategy in the near future, either in an effort to prevent vendor lock-in or in an effort to match workloads with the best available service.

• The AWS Choice: You can’t go wrong with AWS due to its rich collection of tools and services and massive scale. The only reason not to choose Amazon is if you want a more personal relationship, something a small boutique shop can offer. At its size, it’s hard for Amazon to have a close relationship with every customer, but there are resellers and consultants who can offer that type of attentive focus.

• The Azure Choice: Microsoft’s greatest appeal is, of course, to Microsoft shops. All of your existing .Net code will work on Azure, your Server environment will connect to Azure, and you will find it easy to migrate on-premises apps. If you want Linux, DevOps, or bare metal, however, Microsoft would not be the ideal choice. It offers Linux but it takes a back seat in priority to Windows. DevOps is primarily a Linux/open source play, again, something Microsoft does not specialize in.

• The Google Choice: Google is growing quickly but is a work in progress. Its offerings were once meager, and it didn't have a legacy background in dealing with businesses. But it is fully committed and has plowed billions into its cloud efforts. And it has partnered with Cisco, which does know the enterprise. The people who should look at Google now are the ones who looked a year ago and didn't like what they saw. They might be surprised. Google has built its cloud on its strengths, which are scale and machine learning. It's clearly worth a look.


Bottom line: Certain types of companies will be more attracted to certain cloud vendors. So again, if your firm runs Windows and a lot of Microsoft software, you'll probably want to investigate Azure. If you are a small, Web-based startup looking to scale quickly, you might want to take a good look at Google Cloud Platform. And if you are looking for the provider with the broadest catalog of services and worldwide reach, AWS will probably be right for you.



Wednesday, June 12, 2019

Four types of risk management technology



An overview of the risk management technologies we can use:

1. Risk Dashboards

Dashboards are probably the easiest type of technology to put in place, and many enterprise project management tools come with this feature. You can create risk dashboards manually, but it’s a time-consuming process that results in a report that is out of date from the moment it’s finished.

2. Automated Processes


A further type of tech that you can adopt for risk management is automating processes through workflows within a tool. This means that your process of risk identification, assessment, management, monitoring, control and escalation is managed through a single process within a tool. You document a risk in the tool and assign it to the right person to assess, and they will automatically get a notification that they have work to do.
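As a minimal sketch of such a workflow in Python (the risk fields and the notification hook are illustrative assumptions, not taken from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    title: str
    assessor: str          # the person assigned to assess the risk
    status: str = "identified"

def notify(person: str, message: str) -> None:
    # Stand-in for the tool's real notification channel (email, chat, etc.)
    print(f"To {person}: {message}")

def assign_for_assessment(risk: Risk) -> None:
    """Route a documented risk to its assessor and notify them automatically."""
    risk.status = "awaiting assessment"
    notify(risk.assessor, f"Risk '{risk.title}' needs your assessment")

assign_for_assessment(Risk(title="Key supplier may miss delivery", assessor="priya"))
```

In a real tool, the same status change would also feed monitoring, control and escalation steps downstream.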


3. Risk Assessment Tools

You can use software tools to help with risk assessment too. This increases the likelihood that risks are assessed in the same way, against the same model. In turn, this makes it easier to compare risks at the program or portfolio level and have confidence that they really are comparable. Basic risk assessment tools are often included in enterprise project management solutions. Add the impact and probability of the risk into the tool and it will generate a RAG (red/amber/green) status for the risk. This is a simple assessment that you can do manually, but managing it in the tool increases standardization.
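The RAG calculation itself is simple enough to sketch in a few lines of Python; the thresholds below are illustrative assumptions, not a standard:

```python
def rag_status(probability: float, impact: float) -> str:
    """Combine probability and impact (both on a 0-1 scale) into a RAG status."""
    score = probability * impact
    if score >= 0.5:       # illustrative cut-off for red
        return "red"
    if score >= 0.2:       # illustrative cut-off for amber
        return "amber"
    return "green"

print(rag_status(0.8, 0.9))  # high probability, high impact -> red
```

A tool applies exactly this kind of rule consistently to every risk, which is where the standardization benefit comes from.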


4. Advanced Risk Management Tools

Risk management software tools can take your risk management even further. They can do risk modelling, run scenarios and flag problems through early warning indicators in your reporting. When the data is in the tool, advanced risk management tech can take the heavy lifting out of managing your risks. This functionality is sometimes available in your enterprise systems, and available as standalone products too.


Risk is about uncertainty. If you put a framework around that uncertainty, then you effectively de-risk your project. And that means you can move much more confidently to achieve your project goals. By identifying and managing a comprehensive list of project risks, unpleasant surprises and barriers can be reduced and golden opportunities discovered. The risk management process also helps to resolve problems when they occur, because those problems have been envisaged, and plans to treat them have already been developed and agreed. You avoid impulsive reactions and going into “fire-fighting” mode to rectify problems that could have been anticipated. This makes for happier, less stressed project teams and stakeholders. The end result is that you minimize the impacts of project threats and capture the opportunities that occur.



Saturday, May 18, 2019

Risk Management


As a project manager or team member, you manage risk on a daily basis; it's one of the most important things you do. If you learn how to apply a systematic risk management process, and put into action the five core risk management process steps, then your projects will run more smoothly and be a positive experience for everyone involved.

A common definition of risk is an uncertain event that, if it occurs, can have a positive or negative effect on a project's goals. The potential for a risk to have a positive or negative effect is an important concept. Why? Because it is natural to fall into the trap of thinking that risks have inherently negative effects. If you are also open to those risks that create positive opportunities, you can make your project smarter, streamlined and more profitable. Think of the adage - "Accept the inevitable and turn it to your advantage." That is what you do when you mine project risks to create opportunities.


Risk Management is a five step process:

Step 1: Identify the Risk.

Step 2: Analyze the risk.

Step 3: Evaluate or Rank the Risk.

Step 4: Treat the Risk.

Step 5: Monitor and Review the risk.


RISK ANALYSIS


All risks identified will be assessed to identify the range of possible project outcomes. Qualitative analysis will be used to determine which risks are the top risks to pursue and respond to, and which risks can be ignored.

Qualitative Risk Analysis

The probability and impact of occurrence for each identified risk will be assessed by the project manager, with input from the project team using the following approach:

Probability

•            High – Greater than <70%> probability of occurrence

•            Medium – Between <30%> and <70%> probability of occurrence

•            Low – Below <30%> probability of occurrence

Impact

•            High – Risk that has the potential to greatly impact project cost, project schedule or performance

•            Medium – Risk that has the potential to slightly impact project cost, project schedule or performance

•            Low – Risk that has relatively little impact on cost, schedule or performance

Risks that fall within the RED and YELLOW zones will have risk response planning, which may include both a risk mitigation and a risk contingency plan.
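The mapping from probability and impact ratings to zones can be sketched as a simple matrix; the zone boundaries below are illustrative assumptions, since the plan does not define them explicitly:

```python
LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def risk_zone(probability: str, impact: str) -> str:
    """Combine Low/Medium/High probability and impact ratings into a zone."""
    score = LEVELS[probability] + LEVELS[impact]
    if score >= 3:
        return "RED"
    if score == 2:
        return "YELLOW"
    return "GREEN"

def needs_response_planning(probability: str, impact: str) -> bool:
    """RED and YELLOW risks get mitigation and/or contingency plans."""
    return risk_zone(probability, impact) in {"RED", "YELLOW"}
```

Under these assumed boundaries, a High/High risk lands in RED, a Medium/Medium risk in YELLOW, and a Low/Low risk in GREEN.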

Quantitative Risk Analysis

Risk events that have been prioritized using the qualitative risk analysis process will be analyzed further: their effect on project activities will be estimated, a numerical rating will be applied to each risk based on this analysis, and the results will be documented in this section of the risk management plan.


RISK RESPONSE PLANNING

Each major risk (those falling in the Red & Yellow zones) will be assigned to a project team member for monitoring purposes to ensure that the risk will not “fall through the cracks”.

For each major risk, one of the following approaches will be selected to address it:

•            Avoid – eliminate the threat by eliminating the cause

•            Mitigate – Identify ways to reduce the probability or the impact of the risk

•            Accept – Nothing will be done

•            Transfer – Make another party responsible for the risk (buy insurance, outsourcing, etc.)


For each risk that will be mitigated, the project team will identify ways to prevent the risk from occurring or reduce its impact or probability of occurring. This may include prototyping, adding tasks to the project schedule, adding resources, etc.

For each major risk that is to be mitigated or that is accepted, a course of action will be outlined for the event that the risk does materialize in order to minimize its impact.

Tuesday, April 23, 2019

DevOps Case Study




Agile is a set of values and principles about how to develop software. For example, if you have some ideas and you want to turn those ideas into working software, you can use the Agile values and principles as a way to do that. But that software might only be working on a developer's laptop or in a test environment. You want a way to move that software into production infrastructure quickly, easily and repeatably, in a safe and simple way. To do that you need DevOps tools and techniques. Though the implementation of DevOps is always in sync with Agile methodologies, there is a clear difference between the two. The principles of Agile are associated with the seamless production or development of a piece of software. DevOps, on the other hand, deals with development followed by deployment of the software, ensuring faster turnaround time, minimum errors, and reliability.

Much has been written about what DevOps is, but not a lot has been said about what it can do for an organization. The trending software development approach has many quantifiable technical and business benefits, including shorter development cycles, increased deployment frequency, and faster time to market. But because it relies so heavily on increased communication, collaboration, and innovation, it can also be a catalyst for cultural change within an organization.

1. Etsy

Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. For its first several years, Etsy struggled with slow, painful site updates that frequently caused the site to go down. In addition to frustrating visitors, any downtime impacted sales for the millions of users who sold goods through the online marketplace and risked driving them to a competitor.

With the help of a new technical management team, Etsy transitioned from its waterfall model, which produced four-hour full-site deployments twice weekly, to a more agile approach. Today, it has a fully automated deployment pipeline, and its continuous delivery practices have reportedly resulted in more than 50 deployments a day with fewer disruptions. And though Etsy has no DevOps group per se, its commitment to collaboration across teams has made the company a model of the DevOps framework.


2. Fidelity Worldwide Investment

Fidelity Worldwide Investment had several business units developing software applications and was burdened with legacy release processes that placed huge demands on its teams. Apps were deployed manually across hundreds of servers, with each app requiring customization. Manually introduced errors frequently broke the process. When it came time to develop a critical trading application with a firm launch date, the organization knew its error-prone manual process would jeopardize the project. Fidelity used the opportunity to embrace a DevOps approach and implement an automated software release framework that would enable it to meet the rollout schedule. That solution resulted in more than $2.3 million per year in cost avoidance for that app alone. Since then, the Fidelity team has automated the release of dozens of applications, reducing release times from two to three days to one to two hours and decreasing test-team downtime. The process has also made it easier to demonstrate regulatory compliance and has enabled predictable release schedules that stakeholders can rely on.


3. Sony Pictures Entertainment's Digital Media Group

The Digital Media Group (DMG) faced significant challenges delivering a software system to manage entertainment assets for end users. Manual processes and other hurdles typically resulted in a months-long delay between completion of software development and delivery. To smooth out this "last mile," DMG implemented an automated cloud delivery system composed of open source tools and SaaS solutions. Since adopting a continuous delivery model, DMG has cut down its months-long delivery time to just minutes. This allowed developers to focus on adding features and reduced idle resources and associated costs.



Tuesday, March 5, 2019

SWIFT System


                   
With an international debit card, money can be withdrawn anywhere in the world; the card holder need not have an account with the card-issuing financial institution. How does this happen?
SWIFT in Investment Banking Settlements: As defined by Investopedia, SWIFT stands for the Society for Worldwide Interbank Financial Telecommunications. It is a messaging network that financial institutions use to securely transmit information and instructions through a standardized system of codes.

SWIFT SYSTEM

The Telex service was too slow and had no standardized format for the data it transferred, which, together with its insecurity, added up to an inefficient system. To address these shortcomings, seven major international banks gathered in 1974 to discuss a suitable replacement for Telex. Three years later, in 1977, a society was formed and 230 member banks from 5 countries started operating SWIFT. SWIFT now has more than 10,000 members worldwide (in more than 200 countries) and handles more than 15 million messages daily. Any financial institution that holds a banking license can become a member of SWIFT by paying a joining fee and a service charge for each message sent.

                  Using these messages, banks can exchange data for funds transfer between financial institutions. SWIFT enables customers to automate and standardise financial transactions, thereby lowering costs, reducing operational risk and eliminating inefficiencies from their operations.
Although there are other messaging services available such as Fedwire, Ripple and CHIPS, SWIFT maintains its dominant market position. It does this by continually investing in innovation and adding new message codes to further facilitate funds transfer and straight through processing. One recent initiative introduced by SWIFT is the Global Payments Innovation (gpi) which aims to increase the speed, predictability and transparency of cross-border payments.

                 Business to business wire transfers through banks have always been slow and costly despite technological advances which have seen other areas in the payments industry progress. Over 90 leading transaction banks from Europe, Asia Pacific, Africa and the Americas are already signed up to the SWIFT gpi initiative which is now in operation. The first phase of the SWIFT gpi focuses on B2B payments. The goal is to help corporates to improve supplier relations whilst achieving greater treasury efficiencies by enhancing the payments service:
•            Beneficiaries will now receive same-day access to payments instead of waiting periods of several days.

•            Businesses will know in advance how much a bank transfer will cost, adding further transparency to fees in the transfer process.

•            End-to-end payments tracking through a cloud-based service will allow easy tracing of funds from initiation through intermediary banks to the recipient bank account. A message notification that funds have reached the beneficiary account will also be sent to the payer.

A Swift code is a standard format of Bank Identifier Code (BIC), and it is a unique identification code for a particular bank. These codes are used when transferring money between banks, particularly for international wire transfers. Banks also use the codes for exchanging other messages between them.

The Swift code consists of 8 or 11 characters. When an 8-character code is given, it refers to the primary office. The code is formatted as below:

AAAA BB CC DDD
First 4 characters - bank code (only letters)
Next 2 characters - ISO 3166-1 alpha-2 country code (only letters)
Next 2 characters - location code (letters and digits) (passive participant will have "1" in the second character)
Last 3 characters - branch code, optional ('XXX' for primary office) (letters and digits)
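The layout above is easy to validate and split mechanically. Here is a small Python sketch, using DEUTDEFF (Deutsche Bank's primary office in Frankfurt) as the example input:

```python
def parse_bic(code: str) -> dict:
    """Split an 8- or 11-character SWIFT/BIC code into its four fields."""
    code = code.replace(" ", "").upper()
    if len(code) not in (8, 11):
        raise ValueError("a BIC must be 8 or 11 characters long")
    return {
        "bank": code[0:4],
        "country": code[4:6],
        "location": code[6:8],
        # An 8-character code refers to the primary office ('XXX').
        "branch": code[8:] or "XXX",
    }

print(parse_bic("DEUTDEFF"))
```

Real-world validation would also check the letter/digit rules for each field and that the country code is a valid ISO 3166-1 alpha-2 code.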

SWIFT is solely a carrier of messages. It does not hold funds nor does it manage accounts on behalf of customers, nor does it store financial information on an on-going basis. As a data carrier, SWIFT transports messages between two financial institutions. This activity involves the secure exchange of proprietary data while ensuring its confidentiality and integrity.

There are four key areas that SWIFT services fall under within the financial marketplace. They are Securities, Treasury and Derivatives, Trade Services, and Payments & Cash Management.
SWIFT messages consist of five blocks of data: three headers, the message content, and a trailer. They are identified in a consistent manner. They all start with the literal 'MT', which denotes Message Type. This is followed by a 3-digit number that denotes the message category, group, and type.

MT103, for example, is a SWIFT payment message type used in cash transfers, specifically for cross-border/international wire transfers, and is predominantly used between banks and non-bank financial institutions. MT103 is used to make a single payment, and it has a large number of options to describe exactly how the payment should be made (for example, determining the beneficiary account and sender bank details). A few other standard file formats in use are EDIFACT, ANSI X12, SAP and ISO 20022 XML.
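Under that numbering scheme, the three digits of a message type can be pulled apart programmatically; for MT103 this yields category 1 (customer payments), group 0 and type 3:

```python
def split_mt(code: str) -> dict:
    """Break an MT code such as 'MT103' into its category, group and type digits."""
    if not (code.startswith("MT") and len(code) == 5 and code[2:].isdigit()):
        raise ValueError("expected a code of the form 'MT' + three digits")
    return {"category": int(code[2]), "group": int(code[3]), "type": int(code[4])}

print(split_mt("MT103"))  # {'category': 1, 'group': 0, 'type': 3}
```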

Wednesday, February 6, 2019

Facebook Dark Launching Technique


Have you ever wondered why some of your friends get to see the latest Facebook messenger or Gmail upgrade before you do? Why do some people get to play with the newest features weeks before others? And, why do these features sometimes disappear within a week? The answer is a dark launch.

Facebook and Google, along with many leading tech giants, use dark launches to gradually release and test new features to a small set of their users before releasing to everyone. This lets them see if you love it or hate it and assess how it impacts their system's performance. Facebook calls its dark launching tool "Gatekeeper" because it controls consumer access to each new feature. Dark launching is the process of gradually rolling out production-ready features to a select set of users before a full release. This allows development teams to get user feedback early on, test for bugs, and stress test infrastructure performance. A direct result of continuous delivery, this method of release helps in faster, more iterative releases that ensure that application performance does not get affected and that the release is well received by customers.

It is called a dark launch because these feature launches are typically not publicized; rather, they are stealthily rolled out to 1 percent, then 5 percent, then 30 percent of users and so on. Sometimes, a new feature will dark launch for a few days and then you will never see it again. Likely, this is because it did not perform well or the company just wanted to get some initial feedback to guide development. In 2011, Facebook rolled out a slew of new features - timeline, ticker and music functionalities - to its 500 million users spread across the globe. The huge traffic that was generated on Facebook following the release led to a server meltdown. The features that were rolled out garnered a mixed response from users, which led to inconclusive results about the effectiveness of the new features, leaving the company with no actionable insights. This led to an evaluation and reassessment of strategies, resulting in Facebook coming up with the dark launching technique. Using DevOps principles, Facebook created the following methodology for the launch of its new releases.

During a dark launch, one deployment pipeline is turned on to deploy the new features to a select set of users. The remaining hundreds of pipelines are all turned off at this point. The specific user base to which the features have been deployed is continuously monitored to collect feedback and identify bugs. These bugs and this feedback are incorporated into development, tested and deployed on the same user base until the features become stable. Once stability is achieved, the features are gradually deployed to other user bases by turning on other deployment pipelines.

Facebook does this by wrapping code in a feature flag, or feature toggle, which is used to control who gets to see the new feature and when. This exposes pain points and areas of the application's infrastructure that need attention prior to the full-fledged launch, while still simulating the full effect of launching the code to users. Once the features are stable, they are deployed to the rest of the users over multiple releases.
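A common way to implement this kind of gradual rollout is to bucket users deterministically by hashing, so the same user always sees the same variant as the rollout percentage grows. A minimal sketch (the function and flag names are illustrative, not Facebook's actual Gatekeeper API):

```python
import hashlib

def feature_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket; enable the feature
    for users whose bucket falls below the current rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Raising rollout_percent from 1 to 5 to 30 keeps early users enabled
# while gradually adding new ones; setting it to 0 rolls the feature back.
```

Because the bucket depends only on the feature name and user id, no per-user state needs to be stored to keep the experience consistent between requests.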

This way Facebook has a controlled, stable mechanism for delivering new functionality to its massive user base. Conversely, if a feature does not get a good response, they have the option to roll back their deployments altogether. This also helps them prepare their servers for deployment, as they can predict user activity on their website and scale up their servers accordingly.

Facebook, Amazon, Netflix and Google, along with many leading tech giants, use dark launches to gradually release and test new features to a small set of their users before releasing to everyone. The intention of DevOps is to create better-quality software more quickly and with more reliability, while inviting greater communication and collaboration between teams. It is also an automation process that allows quick, safe and high-quality software development and release while keeping all the stakeholders in the loop. This is the real reason why DevOps is seeing an all-time-high adoption, leading to increasing career opportunities in DevOps.


Friday, January 18, 2019

CI/CD - An Overview

Continuous integration is the practice of continuously integrating the changes made to a project and testing them accordingly, at least on a daily basis or more frequently. Each integration is verified by an automated build (including tests) to detect integration errors as quickly as possible.

Providing your users with the best possible software is always the number one priority. But doing so in the fast-paced, ever-changing technology landscape we live in isn't a simple task. As soon as an update is deployed, it seems like the need for the next one is already here. This has been a constant battle for development teams for decades. The introduction of DevOps was a great start, but the need to become even more efficient still remains.

Enter: continuous delivery (CD), which is the process of ensuring your software is always ready for deployment. It goes hand-in-hand with the DevOps and agile movement. Deployments are available with the push of a button, and rollbacks (when necessary) are seamless.

When your build is fast and well-automated, you build and test the whole system more frequently. You catch bugs earlier and, as a result, spend less time debugging. You integrate your software frequently without relying on complex background build systems, which reduces integration problems.

Behind every great CD pipeline is a well-oiled continuous integration (CI) pipeline. CI is the process of developers pushing code early and often into a code repository for automated integration testing. The idea is that every piece of code sent to the repository will be tested within minutes and flagged if any errors occur. If an error does occur, that now becomes the main priority of the developer.

Without continuous integration, back-tracking errors can be a tedious and time-consuming process, and it’s easy to get lost in the maze of code changes. Integrating as early and often as possible makes it easier to identify where the error occurs. CI also reduces the risk of having errors flare up further down the CD pipeline.



While the specific CI process can vary slightly based on a team’s preference, the essential steps are:

1. Build code

2. Send code to repository

3. Test code

4. Send back errors

5. Fix code

6. Repeat steps 2 and 3
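Stripped to its essentials, the loop above behaves like this sketch, where each step is a callable and the first failure halts the pipeline and is reported back (the step names and result structure are illustrative):

```python
def run_pipeline(steps):
    """Run named CI steps in order; stop at the first failure and report it."""
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return {"status": "failed", "step": name, "error": str(exc)}
    return {"status": "passed"}

def build():
    pass  # stand-in for compiling/packaging the code

def test():
    assert 1 + 1 == 2  # stand-in for running the automated test suite

print(run_pipeline([("build", build), ("test", test)]))  # {'status': 'passed'}
```

A real CI server (Jenkins, GitHub Actions, and so on) does the same thing at scale: it runs each configured step, stops on the first error, and flags the failing step so the developer can fix it and push again.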


Testing Process

Once cleared through the code repository tests, the next step is to send your code to more expansive staging and production tests. Once again, the key here is automation. For an effective CD pipeline, little to no effort should be going into these tests, allowing your team to focus on development in the CI stage. The tests performed during this stage typically include regression tests (making sure the application works) and performance tests (making sure the application works efficiently).

These tests should confirm that your application works as it should and integrates properly with your existing platform. In many cases, teams will have small user groups test deployments in production-like environments. This gives your team explicit feedback from customers.


To Deploy/Not to Deploy

This is where continuous delivery and continuous deployment fundamentally disagree. In a standard continuous delivery pipeline, once your application is cleared through testing and declared ready for deployment, it sits in a deployment queue. It is not deployed until someone manually triggers the deployment. By contrast, in a continuous deployment pipeline, as soon as an application passes through testing, it is automatically deployed to end-users.

Monitor

Just as DevOps doesn’t end for developers after the code is written, continuous delivery doesn’t end once an application is deployed. Inevitably some applications will have errors, despite all of the testing you worked so hard to automate. This is completely normal. Catching the errors early is critical to the well-being of your application and to the user experience.


CI/CD is a DevOps best practice because it addresses the misalignment between developers, who want to push changes frequently, and operations teams, who want stable applications. With automation in place, developers can push changes more frequently. Operations teams see greater stability because environments have standard configurations, there is continuous testing in the delivery process, environment variables are separated from the application, and rollback procedures are automated.
