By Jyoti Bansal
November 15, 2012 08:15 AM EST
Deploying APM in the Enterprise
In the last installment we covered how to find, test, and justify purchasing an APM solution. This post focuses on what to do after you’ve made a purchase and started down the path of deploying your coveted APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat; let’s jump right in…
Welcome to Part 4 of my series. It’s time for a celebration: time to break out the champagne, spike the football, and do your end zone dance (easy there, Michael Jackson, don’t hurt yourself). All of the hours you spent turning data into meaningful information, dealing with software vendors, writing requirements, testing solutions, documenting your findings, writing business justifications, and generally bending over backwards to ensure that no objection would stand in your way have culminated in management approving your purchase of APM software. Now the real work begins…
The 7 Ps
A co-worker of mine shared some words of wisdom with me a long time ago that have served me well over the years. It’s a little saying called the 7 P’s, and it goes something like this: Piss Poor Planning Promotes Piss Poor Performance. Deploying and using APM software is not a time for spontaneity or just winging it. If you want to make mistakes and derive little value from the investment you just put your reputation behind, then by all means just jump in with little or no planning. If you want to be a rockstar, you need a solid plan for deploying, configuring, verifying, operationalizing, using, and evangelizing your APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat again; I think there’s a bug going around ;-P
This blog post is a great general outline for planning your implementation. Everything covered in this post should be part of your planning process and should be considered the bare minimum for APM deployment planning within your organization.
The planning stage is the perfect time to ask your APM vendor for documentation on best practices for deploying their software. Your vendor (AppDynamics, wink) has seen its software deployed in many situations across many industry verticals and will have important advice on how to make the deployment and operation of the product as successful as possible. Use your vendor’s depth and breadth of experience to your advantage; you’re paying them, so it’s the least they can do.
Controller: The Brain, Narf!
The first major decision will be an easy one. You probably already covered this during the evaluation, vendor selection, and negotiation phases, but we will recap here: you need to decide whether you will host your own controller or use the vendor’s SaaS environment. In case you don’t already know, the controller is the server component that collects, stores, and analyzes the monitoring data from the agents; basically, the controller is the brains behind the operation. There are many factors to consider when choosing between a SaaS and an on-premises model, and we will not cover them in this post. Your vendor of choice (ahem, ahem, AppDynamics, ahem) will help you decide which option is right for your business circumstances.
Easy peasy lemon squeezy! I have just embedded those words in your head for potentially days, weeks, or years to come. Sorry about that, but it really does describe the SaaS option well. You don’t have to get a server racked, a VM allocated, or disk space configured, solve a Rubik’s Cube in 3 minutes or less, or follow whatever other convoluted deployment process your company may have in order to host your own software. All you really need to do is point your agents at the SaaS controller and you are off and running. Your chosen APM vendor (AppDynamics, of course) will handle the server sizing, capacity, maintenance, and so on for you. Nice!
So you’ve decided to host your own controller(s). We have many clients that choose this route for one reason or another, and we make every effort to support them just as well as our SaaS customers. In this case we won’t be doing all of the work for you, so you need to get cracking on your server deployment process. I hope it’s super easy and streamlined and you can have a new host set up and ready to load software in an hour or less. In reality it may take a few weeks or even months, so you need to know the lead time so that you can appropriately plan the rest of the deployment. You NEED a controller, so there is no point in deploying agents without one. Use this lead time to generate the most awesome plan ever!
Agents: Deploying and Configuring
Agents need applications to do anything meaningful, so it’s a requirement that you figure out which applications you want (or will be allowed) to monitor. You most likely had at least one problematic yet important application in mind when you started your search for an APM tool. Create a list of the applications that need monitoring and prioritize it. I personally prefer creating a top 10 list (you could also call it a “next 10” list) that is an equal mix of applications I suspect will be difficult to instrument and applications I think will be really easy. I do this because you usually don’t deploy agents to application components in a serial manner. It’s typically a parallel process where you can jump from one deployment to the next while you are waiting for approvals, personnel, or anything else that gets in the way of doing actual work.
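The “equal mix” idea above can be sketched as a simple interleave. All of the application names here are made up for illustration; the point is that the prioritized list alternates between quick wins and harder deployments so parallel work never stalls:

```python
# Build a top-10 onboarding list that alternates between applications you
# expect to be easy to instrument and ones you expect to be hard, so there
# is always a quick win in flight while harder deployments wait on approvals.
# All names here are hypothetical.

def build_top_ten(easy, hard, limit=10):
    """Interleave easy and hard candidates into one prioritized list."""
    out = []
    for e, h in zip(easy, hard):
        out.extend([e, h])
    return out[:limit]

easy = ["intranet-portal", "batch-reporting", "hr-app", "wiki", "crm"]
hard = ["order-service", "billing", "legacy-esb", "mainframe-bridge", "fraud"]
print(build_top_ten(easy, hard))
# → ['intranet-portal', 'order-service', 'batch-reporting', 'billing',
#    'hr-app', 'legacy-esb', 'wiki', 'mainframe-bridge', 'crm', 'fraud']
```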
Deploying APM agents should be easy: add a very small amount of software to the server you want to monitor, reference the agent software in your application configuration, and restart your application. It’s basically that easy to deploy an agent. It should also be really easy to configure. In fact, the agent should automatically detect what it needs to monitor and simply work. This is how AppDynamics works, but the same does not hold true for most other APM vendors. Hopefully you saw this when you ran each vendor’s solution through your POC environment. In the interest of full disclosure, I will admit that there are circumstances where NO APM solution can automatically detect your application properly and there is more configuration work to do. This is a problem every APM vendor has to deal with, but thankfully AppDynamics sees this condition with only a very small subset of its customer base. Usually you plug in our agent and we show you what you need to see. It just works!
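For a JVM application, “reference the agent software in your application configuration” usually comes down to one extra JVM argument plus a few system properties. The sketch below composes those arguments; the agent path and exact property names are assumptions for illustration, so check your vendor’s install guide for the real ones:

```python
# Sketch: compose the JVM options that attach an APM Java agent at startup.
# The agent jar path and the -D property names below are illustrative
# assumptions, not guaranteed to match any vendor's documentation exactly.

def apm_jvm_options(agent_jar, controller_host, app_name, tier, node):
    """Return the extra JVM arguments that attach the agent at startup."""
    return [
        f"-javaagent:{agent_jar}",
        f"-Dappdynamics.controller.hostName={controller_host}",
        f"-Dappdynamics.agent.applicationName={app_name}",
        f"-Dappdynamics.agent.tierName={tier}",
        f"-Dappdynamics.agent.nodeName={node}",
    ]

opts = apm_jvm_options("/opt/apm/javaagent.jar", "controller.example.com",
                       "order-service", "web", "web-01")
print(" ".join(opts))
```

Append the printed options to your application server’s startup command and restart; the agent registers itself with the controller from there.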
Awesome, now that we just saved you 80% of the configuration time versus deploying “the other guys” what’s next?
After you deploy agents (whether straight to production or advancing through pre-production environments) and you have used the monitored application a bit, you want to look at the user interface to see if the information contained within looks correct.
- Look at your application flow map to see if you are missing any application components.
- Check the business transactions to see if the expected transactions are there and reporting metrics.
- Do you have end user experience metrics showing up?
- Do you have transaction snapshots showing your custom code executing in the runtime?
- Send out test alerts to see if they make it to their destination. (Alerting is important so we will cover it in another blog post)
If things don’t look right, you need to figure out why. It might be that your application really is different from what you thought (we see this quite often), or it could be a problem with the monitoring. Resolve any issues you see before declaring deployment and configuration victory.
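The flow-map check in the list above lends itself to automation. The sketch below compares the components an agent actually reported against the components you expected; the JSON shape is a simplified assumption, not any vendor’s actual API payload:

```python
# Sketch of an automated sanity check on freshly deployed agents: compare the
# components appearing on the flow map against the components we expected to
# see. The flow-map structure here is a simplified, hypothetical shape.

def missing_components(flow_map, expected):
    """Return expected application components absent from the flow map."""
    seen = {node["name"] for node in flow_map["nodes"]}
    return sorted(set(expected) - seen)

flow_map = {"nodes": [{"name": "web-tier"}, {"name": "orders-db"}]}
expected = ["web-tier", "orders-db", "payment-service"]
print(missing_components(flow_map, expected))  # → ['payment-service']
```

A non-empty result means either the agent missed a component or your mental model of the application was wrong; both are worth chasing down before declaring victory.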
Production Load Cannot Be Simulated Exactly!!!
To realize the most value from your APM purchase you MUST run it in production. No matter how good your quality engineering team is, they cannot script all of the crazy things your users will try in production. It can also be very difficult to duplicate your production application environment anywhere else. For example, say you have 5,000 JVMs spread across multiple cloud providers’ data centers; replicating that environment would be time-consuming and really expensive.
Beyond the technology aspects of running in production you also need to consider your existing processes. Your shiny new APM tool will provide incredible insight into application issues as long as you have it integrated into your processes. Here are some points to consider:
- Are alerts configured so that they are routed to the proper people?
- Does the operations center know about the new alerts that will be coming from your new APM product?
- Is there a process that application owners can follow to request monitoring by your new tool?
- Is there a process to smoothly and efficiently on-board a new application?
- Is the APM tool integrated with other corporate systems? (LDAP, Events Aggregations, Business Intelligence, etc…)
What I am trying to say is: give your company every opportunity to use the hell out of your new tool!
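The first bullet above, routing alerts to the proper people, is worth a concrete sketch. The routing table and alert fields here are illustrative assumptions; in practice this logic would live in your event-management or on-call tooling:

```python
# Sketch: route an incoming APM alert to the right team based on the
# application it came from. The destinations and alert fields are made up
# for illustration; real systems would use an on-call scheduling service.

ROUTES = {
    "order-service": "payments-oncall@example.com",
    "web-tier": "frontend-oncall@example.com",
}
DEFAULT = "ops-center@example.com"

def route_alert(alert):
    """Pick a destination for an alert based on its source application."""
    return ROUTES.get(alert["application"], DEFAULT)

print(route_alert({"application": "order-service", "severity": "critical"}))
# → payments-oncall@example.com
```

The fallback destination matters: an alert from an application nobody has claimed yet should land in the operations center, not in a void.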
Teach Them Well
Educate and evangelize; this will pay dividends tenfold.
Create a short training curriculum for anyone who will need to work with your APM tool. You should have training material for basic usage, advanced concepts (memory leaks, policies, dashboard creation, etc…), and operations (alerts/events) training. You need to make sure the people who will touch the product or consume the data have the information they need to be successful. Their success drives your success.
Tell everyone you can about the success you are having with your new tool. Don’t be annoying to the point where people run the other way when they see you coming, but make sure they know what you are working on and how much of an impact it is having on the business.
For every problem you solve with your new APM tool, take 30 minutes to put together a 3–5 slide presentation. Include the following information in each presentation you create:
- Problem Description: Describe the application, problem, and impact level.
- Resolution: Describe the resolution steps and root cause. Use screenshots from your APM tool.
- Business Impact: Describe how long it took to resolve the issue, how long it normally takes without APM, and quantify the impact to the business of this outage for both scenarios (with and without APM).
These short presentations will equip you with the information you need to defend your decision to purchase APM, justify a larger investment, and propel yourself to rockstar status within your organization.
There is a lot of work to be done to successfully deploy, configure, and use an APM tool in the enterprise, but the potential rewards are staggering. Think about how much lost revenue can be avoided by ensuring your revenue-generating applications don’t go down at peak times. People notice when the decisions you make and the work you do directly impact the bottom line. Put in the effort and get noticed!
Join me next week for the next installment in this series. It will be a blog post dedicated to alerts, yes they are that important.