Web Application Lifecycle Maintenance

A ten-point tune-up checklist

Like an automobile, a web application needs occasional maintenance and management over its life cycle. Although it doesn't need oil changes, it will probably need version upgrades. There may not be manufacturer recalls, but sometimes servers fail or hang. An application doesn't need to be washed and detailed, but it does need to be backed up. And both cars and applications need occasional performance tuning.

This article provides a complete list of the system management functions that need to be performed on a standard-architecture web application, with particular emphasis on doing so in an Infrastructure-as-a-Service environment.

1. Evaluation
Anyone who has implemented an application without sufficient evaluation, only to realize too late that it does not solve the business problem, will understand why evaluation is part of the application lifecycle.

Evaluation is facilitated by two primary components: information about the application and a try-before-you-buy capability. Many questions about an application can be answered efficiently with basic feature and function information, and ideally a competitive comparison of several similar applications will make their relative strengths and weaknesses visible. But these are prerequisites rather than substitutes for actually trying and using the product. Ideally, a "test drive" will not require any setup or configuration, since the goal is only to determine whether the application meets your needs. You want to spend your evaluation time using the software, not learning how to deploy and configure it.

2. Deployment
Deployment is the tip of the system management iceberg - it is the most visible procedure because you cannot even get started without it.

Automating a deployment has many benefits, even if it is superficially a one-time deployment, because the automation script provides documentation and a kind of checklist to ensure that configuration details are handled properly the next time. If an upgrade is performed by redeploying to a new server entirely (much easier with virtual machines and cloud servers), then the upgrade process is just a matter of re-running the automation.

Another benefit of automating deployments is that best practices are made repeatable and documented, thereby reducing the chance of human error.
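
A minimal sketch of what such an automation script might look like, written here in Python over ssh; the host name, repository URL, paths, and service names are all hypothetical, and a real deployment would use whatever configuration management tooling you already have:

    # deploy.py - deployment automation sketch; every value here is illustrative
    import subprocess

    APP_HOST = "app01.example.com"  # hypothetical target server

    def run(cmd):
        """Run one remote step over ssh, stopping on the first failure."""
        subprocess.run(["ssh", APP_HOST, cmd], check=True)

    def deploy():
        # Each explicit step doubles as documentation and as a checklist.
        run("sudo apt-get update && sudo apt-get install -y nginx python3-venv")
        run("git clone https://github.com/example/acmeapp.git /opt/acmeapp"
            " || git -C /opt/acmeapp pull")  # idempotent: clone or update
        run("python3 -m venv /opt/acmeapp/venv && "
            "/opt/acmeapp/venv/bin/pip install -r /opt/acmeapp/requirements.txt")
        run("sudo cp /opt/acmeapp/conf/nginx.conf /etc/nginx/sites-enabled/acmeapp")
        run("sudo systemctl restart nginx acmeapp")

    if __name__ == "__main__":
        deploy()

Re-running the same script against a fresh server then becomes the whole redeployment procedure.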

3. Backup
As soon as you begin to use your application, you should begin backing up the data it stores, keeping the backups in a location that is both physically and logically separate from the primary data store.

Ideally, a backup contains the minimum unique data necessary to reproduce the state of the system. This keeps the cost of transporting and storing the backups low, which in turn encourages a higher backup frequency.  However, sometimes this minimization should be traded off against the amount of time required to restore the system to working order.
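
As a sketch of this minimal-data approach, assuming a MySQL database and an object-store bucket in another region as the offsite location (the database, bucket, and path names are hypothetical):

    # backup.py - dump only the application's unique data and ship it offsite
    import datetime, gzip, shutil, subprocess
    import boto3  # AWS SDK; any offsite object store would serve

    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M")
    dump = f"/tmp/acmeapp-{stamp}.sql"

    # Dump the application data only - not the OS or the software stack,
    # which the deployment automation can always recreate.
    subprocess.run(["mysqldump", "--single-transaction",
                    f"--result-file={dump}", "acmeapp"], check=True)

    # Compress to keep transport and storage cheap, which in turn makes
    # a higher backup frequency affordable.
    with open(dump, "rb") as src, gzip.open(dump + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # A bucket in a different region is both physically and logically separate.
    boto3.client("s3").upload_file(dump + ".gz", "acmeapp-backups-us-west-2",
                                   f"db/{stamp}.sql.gz")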

4. Monitoring
Applications and servers fail or bog down unpredictably. Persistent automated monitoring, with appropriate forms of notification (email, text message), frees you from having to check explicitly on the status of the application, but still ensures that you hear about problems when they happen, rather than when users report them hours later.

Importantly, applications must be monitored at the application level - by robotic access through the application itself. It is common for servers and virtual machines to seem perfectly fine while the application is unresponsive. Remember that users and customers do not care about "server uptime" - they just want to use the application or site.
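
A sketch of such a probe, run from a machine outside the application's own infrastructure; the URL, the expected "OK" marker in the page body, and the mail addresses are assumptions, and the script presumes a local mail relay:

    # probe.py - application-level check: fetch a page that exercises the app
    import smtplib, urllib.request
    from email.message import EmailMessage

    URL = "https://app.example.com/health"  # hypothetical page that touches the DB

    def alert(reason):
        msg = EmailMessage()
        msg["Subject"] = f"acmeapp DOWN: {reason}"
        msg["From"] = "monitor@example.com"
        msg["To"] = "oncall@example.com"
        smtplib.SMTP("localhost").send_message(msg)  # assumes a local mail relay

    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            if resp.status != 200 or b"OK" not in resp.read():
                alert(f"unexpected response, status {resp.status}")
    except Exception as exc:  # timeout, DNS failure, connection refused...
        alert(str(exc))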

Deeper monitoring can signal trends that suggest an imminent failure before it happens. For example, by tracking memory utilization and the number of web server processes, a monitoring system may be able to predict that a server is about to overload. This type of deeper monitoring can also be useful for automated scaling procedures.
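
A hedged sketch of the idea: fit a line to recent memory-utilization samples and warn when the trend is about to cross a threshold. The samples are hard-coded for illustration, and the 95% ceiling and 30-minute horizon are arbitrary assumptions:

    # trend.py - naive linear extrapolation over (minute, percent-used) samples
    samples = [(0, 62.0), (5, 66.5), (10, 71.0), (15, 75.5), (20, 80.0)]

    # Ordinary least-squares slope, in percent per minute.
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(m for _, m in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * m for t, m in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

    if slope > 0:
        minutes_left = (95.0 - samples[-1][1]) / slope  # time until 95% used
        if minutes_left < 30:
            print(f"WARNING: ~{minutes_left:.0f} minutes until memory exhaustion")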

5. Job Scheduling
Many applications have scheduled jobs in addition to monitoring and backups: data rollups, log file archiving, end-of-day reporting.

If the application has this requirement, there must be an easy, flexible, and reliable method of scheduling and automatically performing these jobs. It is common to use cron or Windows Task Scheduler for these procedures, and as long as these tools are accessible this is a workable solution. Even better is an off-server job scheduling mechanism, so that the status of the server and application does not affect whether the job runs and whether failure notifications can be delivered.
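
A sketch of the cron-based approach: the job is an ordinary script, and the schedule lives in the crontab. The database schema, paths, and timing are hypothetical:

    # rollup.py - nightly data rollup job; a crontab entry might invoke it as:
    #   30 1 * * * /opt/acmeapp/venv/bin/python /opt/acmeapp/rollup.py
    import logging, sqlite3

    logging.basicConfig(filename="/var/log/acmeapp/rollup.log",
                        level=logging.INFO, format="%(asctime)s %(message)s")

    # Roll yesterday's detail rows up into a summary table (schema illustrative).
    db = sqlite3.connect("/opt/acmeapp/data/app.db")
    db.execute("""INSERT INTO daily_totals (day, total)
                  SELECT date(created), sum(amount) FROM orders
                  WHERE date(created) = date('now', '-1 day')
                  GROUP BY date(created)""")
    db.commit()
    logging.info("rollup complete")

An off-server scheduler would invoke this same script remotely, so that a wedged server cannot silently suppress both the job and its failure notification.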

6. Upgrades
Most application software and its supporting technology stack are subject to occasional version upgrades and patches.

It is extremely convenient to be able to duplicate the entire application environment easily and perform the upgrade first on a copy. Running manual or automated tests to confirm that the upgrade worked can improve reliability. If the upgrade fails because, for example, a step was left out or a configuration change conflicts with the new version, the duplicate environment can be used to diagnose and repair these issues, and the upgrade process can be repeated until it works properly. This best practice minimizes the downtime associated with the upgrade.
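
A sketch of this rehearse-then-cut-over flow; the five helper functions are hypothetical stand-ins for your cloud provider's API and your test suite, and only the control flow is the point:

    # upgrade.py - rehearse the upgrade on a duplicate environment first
    # All five helpers below are hypothetical; substitute your provider's API.
    def clone_environment(source): ...
    def apply_upgrade(env, version): ...
    def run_smoke_tests(env): ...
    def point_dns_at(env): ...
    def destroy_environment(env): ...

    for attempt in range(1, 4):
        staging = clone_environment("production")  # full copy: app, stack, data
        apply_upgrade(staging, version="2.4.1")    # hypothetical target version
        if run_smoke_tests(staging):
            point_dns_at(staging)  # cut over; keep the old environment as fallback
            print(f"upgrade succeeded on attempt {attempt}")
            break
        print(f"attempt {attempt} failed; repair the procedure and retry")
        destroy_environment(staging)
    else:
        raise SystemExit("upgrade never passed smoke tests; production untouched")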

7. Recovery
Many environments assume that backups will only rarely be used, so accessing them is expensive and possibly time-consuming. In an IaaS environment, with the right tools, it can be relatively easy to retrieve and restore backups to either a production system or to a copy.

Obviously, when a server or application does fail, the first thing to try is to restore the operation of the application in place. Failing that, the next step is to deploy a new application environment and then either restore a backup into it or promote a replication slave to master. The former loses whatever data accumulated since the last backup; the latter typically loses only the very last transaction. In either case, DNS entries must be updated to point to the new environment.
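
A sketch of the promotion path, assuming MySQL-style replication reachable over ssh; the host name is hypothetical, and repointing the application (the commented-out update_dns) would use whatever DNS API you have:

    # failover.py - promote a replication slave when the master is lost
    import subprocess

    REPLICA = "db02.example.com"  # hypothetical standby host

    def sql(stmt):
        subprocess.run(["ssh", REPLICA, f'mysql -e "{stmt}"'], check=True)

    sql("STOP SLAVE")                  # stop applying the dead master's log
    sql("RESET SLAVE ALL")             # forget the old master entirely
    sql("SET GLOBAL read_only = OFF")  # allow writes on the promoted server

    # update_dns("db.example.com", REPLICA)  # hypothetical helper: repoint the app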

Sometimes a server failure is actually a consequence of an entire data center experiencing downtime. In such cases it becomes clear why backups must be kept offsite: the attempt to deploy a new application environment will fail in the original data center, so it must be performed elsewhere.

Ideally, a management system will provide the optional ability to sequence and automate all these procedures in connection with the monitoring. This can minimize downtime and avoid the need to have staff on call 24x7.

8. Scaling
Most scaling needs are gradual, so the cost of frequently changing resources to match load must be weighed against the cost of carrying excess resources for some time. Burst scaling is much less common and substantially more challenging to handle well.

In single server application deployments, scaling consists of redeploying the application on a server with more memory and/or compute resources. Multi-server deployments are scaled by adding or removing servers from a homogeneous horizontally scalable tier, usually a web tier and possibly a separate application server tier.

Newly deployed web or application servers must not only be fully configured, they must also be properly added to (or removed from) the load balancer pool, and in a way that does not affect active connections. Thus, whether these scale changes are initiated manually or dynamically in response to monitoring output, it is crucial that the deployment (or un-deployment) of resources be automated, both to avoid configuration errors and to ensure a transparent user experience on the production environment.
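
The ordering matters enough to be worth showing; in this sketch all six helpers are hypothetical stand-ins for a provider's load balancer and server APIs:

    # scale.py - grow or shrink the web tier without dropping connections
    import time

    def provision_server(image): ...  # hypothetical provider API, as are the rest
    def health_check(node): ...
    def lb_register(node): ...
    def lb_drain(node): ...
    def lb_deregister(node): ...
    def terminate(node): ...

    def scale_up():
        node = provision_server(image="acmeapp-web-v7")  # fully configured image
        while not health_check(node):  # never register a node that isn't serving
            time.sleep(5)
        lb_register(node)              # only now may it receive traffic
        return node

    def scale_down(node):
        lb_drain(node)       # stop new connections; let active ones finish
        lb_deregister(node)
        terminate(node)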

9. Tuning
Sometimes application deployments can be tuned to perform better independently of resource scaling. Typically this involves changing configuration parameters and then restarting the web server or rebooting the server.

If system management for the application is largely automated, any manual changes need to be reflected in the automated deployment procedures so that they carry through to later re-deployments (including backup restores, deploy-from-scratch upgrades, and the like). A very sophisticated management system might perform tuning automatically based on the load and performance characteristics of the application, but this is rare because tuning tends to be highly application-specific.
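
One way to keep manual tuning and automation in agreement is to make the tunables data that the deployment scripts render into the real configuration files; a sketch, with illustrative parameter names and values:

    # tunables.py - tuning parameters live with the deployment automation,
    # so restores, upgrades, and scale-ups all carry the tuned values forward.
    TUNABLES = {
        "worker_processes": 8,    # e.g., raised from 4 after load testing
        "keepalive_timeout": 15,  # seconds
        "db_pool_size": 20,
    }

    def render(template_path, out_path):
        """Fill a config template containing fields like {worker_processes}."""
        with open(template_path) as f:
            text = f.read().format(**TUNABLES)
        with open(out_path, "w") as f:
            f.write(text)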

10. Utility Management
Many application deployments include utility software that provides, for example, security, log analysis, caching, or email delivery. These utilities are often even more challenging to install than the technology stack or the application itself, and configuring them to connect to the application is almost always tricky. Consequently, a compatibility matrix, along with automated deployment procedures that allow each utility to be installed independently, is an enormous time-saver. Automated removal of these utilities is also crucial, as removal can be even more difficult than installation.
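
A compatibility matrix can be as simple as data that an automated installer checks before proceeding; in this sketch the utility names, versions, and install commands are all hypothetical:

    # utilities.py - compatibility matrix plus install dispatch
    import subprocess

    COMPAT = {
        # utility: application versions it is known to work with
        "log-analyzer": ["2.3", "2.4"],
        "cache-layer":  ["2.4"],
        "mail-relay":   ["2.2", "2.3", "2.4"],
    }

    INSTALL = {
        "log-analyzer": "./install_analyzer.sh --app-url https://app.example.com",
        "cache-layer":  "sudo apt-get install -y memcached && ./wire_cache.sh",
        "mail-relay":   "sudo apt-get install -y postfix && ./wire_mail.sh",
    }

    def install(utility, app_version):
        if app_version not in COMPAT.get(utility, []):
            raise RuntimeError(f"{utility} is not certified for app {app_version}")
        subprocess.run(INSTALL[utility], shell=True, check=True)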

We have seen that there are numerous system management activities to be performed in a typical web application deployment. Accomplishing these tasks manually is burdensome and requires a fair amount of skill. In the Infrastructure-as-a-Service world, most of these procedures can be fully automated, or automated but manually initiated; further, they can be performed in ways that are more reliable and testable than in a bare-iron data center. With an appropriate IT Process Automation system, a single-tenant application deployment in the cloud can be almost as easy as Software-as-a-Service, but without the attendant loss of control and flexibility.

More Stories By Dave Jilk

Dave Jilk has an extensive business and technical background in both the software industry and the Internet. He currently serves as CEO of Standing Cloud, Inc., a Boulder-based provider of cloud-based application management solutions that he cofounded in 2009.

Dave is a serial software entrepreneur who also founded Wideforce Systems, a service similar to and pre-dating Amazon Mechanical Turk; and eCortex, a University of Colorado licensee that builds neural network brain models for defense and intelligence research programs. He was also CEO of Xaffire, Inc., a developer of web application management software; an Associate Partner at SOFTBANK Venture Capital (now Mobius); and CEO of GO Software, Inc.

Dave earned a Bachelor of Science degree in Computer Science from the Massachusetts Institute of Technology.
