


Blog Feed Post

5 reasons your website might slow down this holiday season (or anytime)

The National Retail Federation (NRF) predicts that this year’s holiday sales will increase 4.1 percent to $586.1 billion. But here’s a wrinkle in the data that nobody really records: The companies that are making money are those that have fast, responsive websites. Companies with slow websites won’t be cashing in this season.

In fact, a Kissmetrics report on shopping cart abandonment found that 40 percent of people abandon a website that takes more than three seconds to load, and an even less forgiving group, almost 50 percent of users, expects a website to load in two seconds or less. This is just the latest in a slew of similar studies, produced since the dawn of the e-commerce era, that conclude website performance correlates directly with revenue performance.

So what can you do to ensure your web pages load in two seconds or less? Avoid the following faux pas. These are the most common problems we see that slow e-commerce sites down to the point of depressing sales.

1. Unforeseen traffic spikes. Heavy traffic is one of the most obvious reasons a website slows down, and most IT departments provision for this. But what if IT doesn’t know what’s coming, or when it’s coming? A surge of users hitting a site for a reason IT doesn’t know about is a big risk, and an easily preventable one.

Historically, there’s always been a delineation between IT and marketing. To help bridge this gap, many organizations have hired a chief web officer (CWO), who oversees an organization’s web presence, including all Internet and intranet traffic. The CWO communicates marketing’s website performance needs to the IT department with enough lead time to prepare for any big promotional events.

As soon as marketing suspects that the website might receive heavier-than-normal traffic, IT and marketing should start working together on a schedule that will help avoid any last minute problems. The most important thing marketing should be communicating is how many users they are expecting and how long they expect core page load to take.

Not every situation is the same, but don’t despair if your website goes down just when you expected to rake in huge online sales. There are a few things you can do to remedy the situation. A common strategy is to throw more bandwidth or CPU at the site to resolve the issues, but it’ll cost you. Before doing this, organizations should conduct a quick cost-benefit analysis.

A business with an overloaded site will need to decide if the revenue they will bring in from their site staying up will break even with or surpass the amount they put into extra bandwidth or CPU.
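That break-even check is simple enough to sketch in a few lines. The figures below are hypothetical, not from the article; the point is only that the decision reduces to comparing expected extra revenue against the cost of extra capacity:

```python
def breaks_even(expected_extra_revenue, extra_capacity_cost):
    """True if paying for extra bandwidth/CPU at least pays for itself."""
    return expected_extra_revenue >= extra_capacity_cost

# Hypothetical numbers: a surge expected to bring in $12,000 in sales,
# versus $3,500 for temporary extra capacity.
print(breaks_even(12_000, 3_500))   # worth buying the capacity
print(breaks_even(2_000, 3_500))    # cheaper to let some traffic queue
```

In practice you would also fold in the revenue lost per minute of downtime, which usually tips the scale toward buying capacity during peak season.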

2. Inadequate infrastructure and code base measurement and testing. This problem can be avoided during the software development lifecycle by using tools that realistically measure your website’s performance from an external perspective, as well as having benchmarks associated with testing. During the software development lifecycle, the following factors affect your site’s speed:

  • Where the infrastructure is located geographically. If you’re selling to the Asian market but planning to host your infrastructure in Amazon East, you’re going to experience latency delays right off the bat.
  • Whether to cache or use CDNs. There is a subtle difference between the two, but front-end caching will help you avoid taxing your web servers, something that will cause your website to slow down. Front-end caching places a cached version of the data right in front of the web server and can be done relatively inexpensively with freeware technology. CDNs come at a more significant cost, but ensure localized delivery of content, saving you the latency the network might otherwise introduce.
  • Image size. If the graphics on your site are not optimized, pages will take longer to download. You need a way to analyze the graphics throughout your site, find those that are suboptimal, and redeploy optimized versions.
  • Whether you are using standalone or shared hosting environments. Standalone services allow for improved control and understanding of your environment and performance. A shared environment is like an apartment complex — you don’t know much about your neighbors or how their application/environment could be affecting your performance. While shared environments might be cheaper in the short-term, they could very well cost more over time.
  • Whether you are using virtualized instances or traditional servers. Depending on the application requirements, virtualized instances could be more convenient for deployment and backup purposes. However, they could cause performance issues. As a result, evaluate the overhead associated with your application on a virtualized instance versus a non-virtualized environment.
  • What type of database you chose. Whether it’s MySQL or Cassandra, SQL vs. NoSQL, we repeatedly see underutilized or misconfigured setups that cause significant performance issues. We also see organizations choose database solutions without weighing the real trade-offs: often a database is picked based solely on the available in-house or outsourced expertise rather than the actual needs of the application.
  • What type of OS you chose. Costs and technical expertise are the two most common drivers behind operating system architecture and design. But the success of the OS ultimately comes down to optimization. Fine-tuning can be performed according to best practices; however, running a load test against your environment will allow you to truly optimize it.
  • If this site will be hosted in your own data center, co-located, or in a cloud hosting environment. Many organizations today begin by hosting their application in the cloud for rapid deployment, short term wins, and proof of concept to investors. As the application grows or the user base increases, organizations often will consider and migrate to their own data center or at least out of the cloud. There are appealing solutions today that allow for applications to continue to scale in an effort to mimic many popular cloud environments. Regardless of the environment, it’s imperative to learn your performance numbers and ensure that you meet or exceed performance metrics as you migrate.
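The front-end caching idea from the list above can be sketched as a small TTL cache sitting between requests and an expensive backend call. The class name, the 30-second TTL, and the stubbed backend are all illustrative, not any specific product’s API:

```python
import time

class FrontEndCache:
    """Minimal TTL cache: serve a stored copy instead of hitting the backend."""
    def __init__(self, fetch, ttl_seconds=30):
        self.fetch = fetch          # the expensive backend call
        self.ttl = ttl_seconds
        self._store = {}            # key -> (value, expiry timestamp)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        if time.monotonic() < expires:
            return value            # cache hit: backend untouched
        value = self.fetch(key)     # cache miss: go to the backend
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Usage: count how often the "backend" is actually hit.
calls = []
cache = FrontEndCache(lambda k: calls.append(k) or f"page:{k}")
cache.get("/home"); cache.get("/home"); cache.get("/home")
print(len(calls))   # backend was hit only once
```

Real front-end caches (Varnish, nginx proxy_cache and the like) do the same thing at the HTTP layer: repeated requests are served from the cache, and your web servers only see the misses.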

3. Lack of maintenance. Conducting incremental performance tests with each new update or change to your environment might sound like a lot of extra work for your IT department. But a few subtle efficiencies solve multiple problems at once. Spriting, for example, combines multiple images into a single file (and the same idea applies to concatenating CSS files), reducing the number of HTTP requests a page has to make.

You can continue to tweak your environment by optimizing your code with each update of the site. Implementing cache management will regulate which and how many objects to keep in memory. Regular patch management maintenance can prevent memory leaks within the code base that cause slowness.
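Cache management of the kind described, capping which and how many objects stay in memory, is usually implemented with an eviction policy such as LRU (least recently used). A minimal sketch, with illustrative names and sizes:

```python
from collections import OrderedDict

class BoundedCache:
    """Keep at most max_objects in memory; evict the least recently used."""
    def __init__(self, max_objects):
        self.max_objects = max_objects
        self._store = OrderedDict()

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.max_objects:
            self._store.popitem(last=False)   # drop the oldest entry

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)          # mark as recently used
        return self._store[key]

cache = BoundedCache(max_objects=2)
cache.put("css", "...")
cache.put("logo", "...")
cache.get("css")                  # "css" is now the most recently used
cache.put("banner", "...")        # evicts "logo", the least recently used
print(list(cache._store))         # ['css', 'banner']
```

The same policy is what a tuned memcached or an in-process object cache applies for you; the point of regulating it explicitly is that memory stays bounded no matter how much content the site serves.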

Most organizations find that maintenance works best on a regular schedule, performed whether the environment has changed or not. Microsoft, for example, has Patch Tuesday: one Tuesday a month dedicated to making sure its products are updated with the latest patches. The same cadence works for reviewing your own code base and figuring out how best to optimize as the environment changes.

4. Inability to scale. A lot of organizations will develop sites that are not built to scale to the level they need, even though this is such a fundamental component of the software development lifecycle. We talk to a lot of web developers whose strategy is to simply buy more resources — hardware/software, bandwidth, CPU, memory, servers, etc. — than they need, and then assume that the extra will help them handle any heavy traffic that comes down the pike.

A more practical strategy (that will also save you money) is to take the time to develop an adaptive environment that you know can scale. Again, the sure-fire way to avoid scaling problems is to test, and test often, so that you know every part of the stack can scale. And I mean test everything: the front-end and back-end web servers, the databases, and the application servers.
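One hedged sketch of that "test everything" idea: run a concurrent smoke test that exercises a probe for each tier under simulated load. The tier checks here are stubs; a real harness would hit your actual web, application, and database servers and assert on response times as well as success:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub health checks standing in for real per-tier probes (assumed names).
def check_web_server():  return "web: ok"
def check_app_server():  return "app: ok"
def check_database():    return "db: ok"

TIERS = [check_web_server, check_app_server, check_database]

def load_test(tiers, concurrent_users=50):
    """Run every tier's check once per simulated user, in parallel."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(t) for _ in range(concurrent_users) for t in tiers]
        return [f.result() for f in futures]

results = load_test(TIERS)
print(len(results))                            # 150 checks ran
print(all(r.endswith("ok") for r in results))  # every tier held up
```

The design point is that no tier is exempt: a site that scales at the web tier but falls over at the database still falls over.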

5. Avoiding quality measurements. Some IT teams are afraid to shine a light on their own work for fear of exposing errors they might have made during development. This is a common internal political problem for most organizations.

The bottom line is that if the website is slow, revenue is lost, so it needs to be confronted. If an IT team finds errors in its website after it goes live, they are often hesitant to draw attention to it right away, or even at all.

To be clear, I’m not saying internal IT teams can’t detect errors or are incapable of fixing them. I’m saying that our customers are often relieved to bring in a third party that will objectively identify errors and is guaranteed to have the time and resources to fix them.

How much of this holiday season’s expected $586 billion will you be generating? Hopefully, a lot. Especially if you take the time now to pay attention to your website’s performance and do what it takes to make sure your customers get the best experience. Yes, the competition for customers will be fierce, but sticking to these five simple tips will keep your website up and running through January.


More Stories By Sven Hammar

Sven Hammar is Co-Founder and CEO of Apica. In 2005, he had the vision of starting a new SaaS company focused on application testing and performance. Today, that concept is Apica, the third IT company he has helped found in his career.

Before Apica, he co-founded and launched Celo Communication, a security company built around PKI (e-ID) solutions. He served as CEO for three years and helped grow the company from five people to 85 in two years. Right before co-founding Apica, he served as Vice President of Marketing, Bank and Finance, at the security company Gemplus (GEMP).

Sven received his Master of Science in industrial economics from the Institute of Technology (LitH) at Linköping University. When not working, you can find Sven golfing, working out, or spending time with family and friends.
