Elastic Scaling of APIs in the Cloud

Here’s a quick set of practices we used to achieve our goal

As an Enterprise Architect for Intel IT, I worked with IT Engineering and our Software and Services group on the elastic scaling of the APIs that power the Intel AppUp® center. Our goal was to scale our APIs to at least 10x our baseline capacity (measured in transactions per second) by moving them to our private cloud, and ultimately to be able to connect to a public cloud provider for additional availability and scalability. Here’s a quick set of practices we used to achieve our goal:

  1. Virtualize everything.  This may seem obvious and is probably a no-op for new APIs, but in our case we were using bare-metal installs at our gateway and database layers (the API servers themselves were already running as VMs). While our gateway hardware appliance had very good scalability, we knew we were ultimately targeting the public cloud and that our need for dynamic scaling could exceed our ability to add new physical servers. Using a gateway that scales as pure-software virtual machines, without special purpose-built hardware, helped us achieve our goal here.
  2. Instrument everything.  We needed to be able to correlate leading indicators like transactions per second with system load at each layer so we could begin to identify bottlenecks. We also needed to characterize our workload for testing – understanding a real-world sequence of API methods and the mix and ordering of reads and writes. This allowed us to create a viable set of load tests.
  3. Identify bottlenecks.  We used Apache JMeter to generate load and identify the points where latency became an issue, correlating that against system load to find out where we had reached saturation and needed to scale (a minimal load-test sketch appears after this list).
  4. Define a scaling unit. In our case, we were using dedicated DB instances rather than database-as-a-service, so we decided to scale all three layers together. We identified how many API servers would saturate the DB layer, and how many gateways we would need to manage the traffic. We then defined a scaling unit: a collection of gateway, API server, and database VMs that is provisioned together (sketched in code after this list). We might have scaled each layer independently had our API been architected differently, or if we were building from scratch on database-as-a-service.

    Example collection for elastic scaling

  5. Repeat. The above let us scale from 1x to about 5x or 6x without any problem. However, at around 6x we discovered a new bottleneck: the overhead of replicating commits across the database instances. We went back to the drawing board and redesigned the back end for eventual consistency so we could reduce database load (the write path is sketched after this list).
  6. Automate everything.  We use Nagios and Puppet to monitor and respond to health changes. A new scaling unit is provisioned when we hit predefined performance thresholds (see the control-loop sketch after this list).


    Automation/Orchestration workflow

  7. Don’t forget to test scaling down.  If you set a threshold for removing capacity, it’s important to make sure that your workflow allows for a graceful shutdown and doesn’t impact calls that are in progress (the control-loop sketch below includes a drain step for exactly this reason).
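
To make steps 2 and 3 concrete, here is a minimal load-generation sketch in Python (our real tests were JMeter test plans against our characterized workload). The base URL, the endpoint paths, and the 80/20 read/write mix are illustrative assumptions, not our production traffic:

```python
# Minimal load-generation sketch: replays a read-heavy mix of hypothetical API
# calls from worker threads and reports latency percentiles and achieved TPS.
import random
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://api.example.com"          # hypothetical gateway endpoint
READ_PATHS = ["/catalog/apps", "/catalog/apps/123", "/users/me"]
WRITE_PATHS = ["/orders"]                     # POSTs in a real test; GET here for brevity

def one_call() -> float:
    """Issue one API call and return its latency in seconds."""
    path = random.choice(READ_PATHS if random.random() < 0.8 else WRITE_PATHS)
    start = time.perf_counter()
    try:
        urllib.request.urlopen(BASE_URL + path, timeout=10).read()
    except Exception:
        pass                                   # count errors separately in a real test
    return time.perf_counter() - start

def run(total_calls: int = 1000, workers: int = 50) -> None:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: one_call(), range(total_calls)))
    elapsed = time.perf_counter() - start
    print(f"TPS: {total_calls / elapsed:.1f}")
    print(f"p50: {statistics.median(latencies) * 1000:.0f} ms")
    print(f"p95: {statistics.quantiles(latencies, n=20)[18] * 1000:.0f} ms")

if __name__ == "__main__":
    run()
```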
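
For step 4, the scaling unit can be thought of as data plus a provisioning action. The sketch below is a simplified stand-in: the VM counts are placeholders for the ratios we derived from the saturation tests, and provision_unit() stands in for the real orchestration calls.

```python
# A scaling unit expressed as data (a hedged sketch, not our actual manifests).
from dataclasses import dataclass

@dataclass(frozen=True)
class ScalingUnit:
    gateways: int       # software gateway VMs
    api_servers: int    # application-tier VMs
    db_instances: int   # dedicated DB VMs for this unit

# Illustrative ratio: enough API servers to saturate one DB, fronted by two gateways.
UNIT = ScalingUnit(gateways=2, api_servers=6, db_instances=1)

def provision_unit(unit: ScalingUnit, unit_id: int) -> list[str]:
    """Pretend-provision one unit; returns the VM names it would create."""
    vms = (
        [f"gw-{unit_id}-{i}" for i in range(unit.gateways)]
        + [f"api-{unit_id}-{i}" for i in range(unit.api_servers)]
        + [f"db-{unit_id}-{i}" for i in range(unit.db_instances)]
    )
    print(f"provisioning unit {unit_id}: {vms}")
    return vms

if __name__ == "__main__":
    provision_unit(UNIT, unit_id=1)
```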
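
The eventual-consistency redesign in step 5 roughly follows this pattern: commit locally, queue the change, and let a background worker bring the other database instances up to date so a commit no longer waits on every replica. This is a hedged sketch of the idea, not the actual AppUp back end:

```python
# Sketch of an eventually consistent write path: local commit plus asynchronous
# replication via a background worker.
import queue
import threading

replication_queue: "queue.Queue[dict]" = queue.Queue()

def handle_write(record: dict) -> None:
    """Commit locally, then hand replication off to a background worker."""
    # ... local DB commit would happen here ...
    replication_queue.put(record)             # replicas converge later

def replicator() -> None:
    """Drain the queue and apply each change to the other DB instances."""
    while True:
        record = replication_queue.get()
        # ... apply to remote replicas, with retry on failure ...
        print(f"replicated {record}")
        replication_queue.task_done()

threading.Thread(target=replicator, daemon=True).start()

if __name__ == "__main__":
    handle_write({"order_id": 42, "status": "purchased"})
    replication_queue.join()                  # wait for the demo write to replicate
```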
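
Steps 6 and 7 boil down to a threshold-driven control loop with a drain step before any teardown. The threshold numbers, metric sources, and provisioning hooks below are placeholders for what Nagios and Puppet handle in our environment:

```python
# Hedged sketch of threshold-driven scale-up/scale-down with connection draining.
import time

SCALE_UP_TPS = 800      # add a unit above this sustained load (illustrative)
SCALE_DOWN_TPS = 300    # remove a unit below this (illustrative)
DRAIN_TIMEOUT_S = 120   # max time to wait for in-flight calls to finish

def current_tps() -> float:
    """Placeholder for the monitoring query (Nagios performance data in our case)."""
    return 0.0

def in_flight_requests(unit_id: int) -> int:
    """Placeholder for a gateway/API-server counter of active calls."""
    return 0

def scale_up(units: list[int]) -> None:
    unit_id = max(units, default=0) + 1
    print(f"provisioning unit {unit_id}")      # orchestration/Puppet run goes here
    units.append(unit_id)

def scale_down(units: list[int]) -> None:
    if len(units) <= 1:
        return                                 # always keep the baseline unit
    unit_id = units[-1]
    print(f"draining unit {unit_id}")          # remove from the load balancer first
    deadline = time.monotonic() + DRAIN_TIMEOUT_S
    while in_flight_requests(unit_id) > 0 and time.monotonic() < deadline:
        time.sleep(1)                          # let in-progress calls complete
    print(f"tearing down unit {unit_id}")
    units.pop()

def control_loop(units: list[int]) -> None:
    while True:
        tps = current_tps()
        if tps > SCALE_UP_TPS:
            scale_up(units)
        elif tps < SCALE_DOWN_TPS:
            scale_down(units)
        time.sleep(60)                         # evaluate once a minute

if __name__ == "__main__":
    units = [1]                                # the baseline unit is always present
    scale_up(units)
    scale_down(units)
```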

The above approach got us to 10x our initial capacity in a single data center. Because of some of our architecture decisions (coarse-grained scaling units and eventual consistency) we were then able to add a global load balancer (GLB) and scale out to multiple data centers – first to another internal private cloud and then to a public cloud provider.
