Whitehorse

The Microsoft Dynamic Systems Initiative and Enterprise Architecture

With Whitehorse, Microsoft has placed a significant stake in the ground when it comes to modeling enterprise services. While Whitehorse is part of the not-yet-released Visual Studio 2005 (codenamed "Whidbey"), Microsoft has publicly discussed and demonstrated significant elements of Whitehorse, and alpha code is currently in use by select Microsoft customers. This article discusses key Whitehorse concepts and capabilities that Microsoft has previously announced, and puts them in context with the key forces, such as the Microsoft Dynamic Systems Initiative and the emerging discipline of enterprise architecture, that are driving enterprises to embrace modeling as they move toward service-oriented architecture (SOA)–based development.

What Is Whitehorse?
To quote from the Microsoft Whitehorse FAQ (whitehorsefaq.aspx), Whitehorse "is a feature of Visual Studio Whidbey that simplifies the architecture, design, and development of applications comprised of distributed services. The service-oriented application designer consists of a number of tools, including the distributed services designer, which enables architects to design their application architecture visually. Developers can work with code generated from this tool and keep code changes synchronized with the visual design. Additionally, the logical system architecture designer allows infrastructure architects to visually model the data center, map the application to the data center, and validate it against the constraints of the application/data center prior to actual deployment. Reports generated from this help document the deployment mapping."

Microsoft is making three key points in this statement; let's inspect each one in turn to understand its relevance to enterprise development and operations teams. First, Whitehorse introduces a distributed services designer that enables architects to design their application architecture visually. Why is this significant? Given the strong industry trend toward Web services and SOAs, including Microsoft's currently available Web Services Enhancements (WSE) and its future Indigo capabilities, the Visual Studio team has recognized the importance of managing service definition and development visually, through a modeling environment tightly linked with underlying code generation.
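
Whitehorse itself emits .NET code from the design surface; since no generated code appears in this article, the following is only a minimal sketch, in Python and with invented names, of the kind of service skeleton a designer-driven tool might generate and keep synchronized with the model:

    # Hypothetical stub for a "CatalogService" as a modeling tool might emit it.
    # Service, type, and operation names are invented for illustration;
    # Whitehorse generates .NET code, not Python.
    from dataclasses import dataclass

    @dataclass
    class ProductSummary:
        sku: str
        name: str
        unit_price: float

    class CatalogService:
        """Service interface stub: the modeling tool owns the signatures,
        the developer fills in the bodies, and edits flow back to the model."""

        def find_products(self, keyword: str) -> list:
            raise NotImplementedError("generated stub -- implement me")

The essential point is the round trip: the visual model and the code skeleton describe the same service contract, so a change to either one is reflected in the other.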

Second, Whitehorse provides a logical system architecture designer that allows infrastructure architects to visually model the data center. While tools already exist to manage IT assets (servers, network hardware, and the like), Whitehorse enables operations team members to document those assets in a model and to describe their operational characteristics, such as security settings and the types of components each server is allowed to host.

Finally, Whitehorse allows application designers to validate application architectures against targeted data center deployments. While other Whitehorse features are important, this one is perhaps the most significant in the application development/deployment life cycle. By allowing developers to validate their application structure during the design phase, Whitehorse helps development and operations staffs communicate effectively with each other, avoiding the "big oops" that often occurs when an application that has been fully tested in the development/QA environment cannot be deployed into the operational environment, or performs so poorly there (in speed or stability, for example) that its deployment is impractical.

DSI and Enterprise Architecture
Some of you may have heard of Microsoft's Dynamic Systems Initiative (DSI) and may also be familiar with the emerging field of enterprise architecture. Where does Whitehorse fit within these concepts? Whitehorse is one of the primary tools within Microsoft's Dynamic Systems Initiative and, as such, will be one of the key ways that DSI is exposed to the enterprise developer and operations communities. DSI, in turn, expresses some key enterprise architecture concepts in concrete terms for Windows- and .NET-based application architectures. Let's explore DSI and enterprise architecture to provide a context for the value of Whitehorse within an enterprise IT environment.

Dynamic Systems Initiative and the System Definition Model
Quoting again from Microsoft's Whitehorse FAQ, the Microsoft Dynamic Systems Initiative "is a broad Microsoft and industry initiative uniting hardware, software and service vendors around a new software architecture based on the System Definition Model (SDM). This new architecture is becoming the focal point for how we are making product investments to dramatically simplify and automate how our customers will develop, deploy, and operate applications and IT infrastructure. The System Definition Model (SDM) is a live Extensible Markup Language (XML) blueprint that spans the IT life cycle and unifies IT operational policies with the operational requirements of applications. It is relevant at both design time and at run time. At design time, it will be exposed through Visual Studio to enable IT operators to capture their policies in software and developers to describe application operational requirements. At deployment time, the SDM description of the application will enable the operating system to automatically deploy the complete application and dynamically allocate a set of distributed server, storage, and networking resources that the application requires."

In essence, the SDM provides a single consolidation point that gives application developers a precise and concise way to document the operational needs of a distributed system. Once documented in this way, an SDM instance can be used to communicate those operational needs to the IT staff responsible for defining and maintaining the organization's operational IT infrastructure. Tools such as Whitehorse will be used both to generate SDM document instances, which represent a distributed application's operational requirements, and to validate those requirements against a candidate operational deployment topology.
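
The SDM schema itself is not reproduced in this article, so as a rough illustration only, here is a Python sketch that emits a toy SDM-like XML instance for one service; all element and attribute names below are invented, not taken from the actual SDM specification:

    # Build a toy "SDM-like" XML blueprint fragment for one service's
    # operational requirements. Element and attribute names are invented
    # for illustration; the real SDM schema is defined by Microsoft.
    import xml.etree.ElementTree as ET

    app = ET.Element("application", name="ECommerceSite")
    svc = ET.SubElement(app, "service", name="OrderService")
    ET.SubElement(svc, "requirement", kind="authentication", value="windows")
    ET.SubElement(svc, "requirement", kind="hosting", value="web-service")

    print(ET.tostring(app, encoding="unicode"))
    # <application name="ECommerceSite"><service name="OrderService">...

The point is not the particular markup but the contract it represents: a machine-readable statement of what the application needs from its environment, which a tool can later check against what the environment actually provides.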

Enterprise Architecture
The bridging of application to operational needs is one aspect of an emerging IT discipline called enterprise architecture. One definition states that enterprise architecture "provides, on various architecture abstraction levels, a coherent set of models, principles, guidelines, and policies, used for the translation, alignment, and evolution of the systems that exist within the scope and context of an Enterprise." (www.geao.org/aboutea/definition.jsp)

A concrete example of an enterprise architecture is the Federal Enterprise Architecture (FEA), an initiative driven by the federal government to support and encourage cross-agency collaboration, transformation, and government-wide productivity improvements. While many details of the FEA are relevant only to government activities, the FEA also lays out a useful architectural structure that is increasingly being used by other enterprises to scope and manage their architectural activities. The FEA is composed of the following five layers:

  • Performance Reference Model (PRM): Framework to measure the performance of major IT investments and their contribution to program performance
  • Business Reference Model (BRM): Function-driven framework for describing the business operations of the federal government, independent of the agencies that perform them
  • Service Component Reference Model (SRM): Business- and performance-driven, functional framework that classifies service components with respect to how they support business and/or performance objectives
  • Data and Information Reference Model (DRM): Model that describes, at an aggregate level, the data and information that support program and business line operations
  • Technical Reference Model (TRM): Component-driven, technical framework used to identify the standards, specifications, and technologies that support and enable the delivery of service components and capabilities
When applied to application development, this architectural framework can be used to describe the entire application project life cycle, from identifying the initial business need (PRM) to specifying the new processes and functions (and modifications to existing processes and functions) required to meet that business need (BRM), to defining the functional services and data elements that must be implemented to support the application meeting the business need (SRM and DRM), to specifying how the services and application components consuming those services will be deployed on the organization's IT infrastructure (TRM). Whitehorse and the DSI fit directly within this architectural framework, addressing the SRM and TRM layers and, more specifically, the definition and binding of application services specified as part of an enterprise's SRM to server and network infrastructure specified by the enterprise's TRM.

You may now be asking yourself, "Why are all of these models necessary? Why can't I just create services and deploy them as I need them?" While development tools, such as Whitehorse, will certainly make it much easier to create and deploy services, it's important to keep sight of the big picture. Managing your services to prevent functional redundancy and to make sure that you are building the right services at the right time is a big part of what an enterprise architecture is designed to do. In fact, most enterprise architects will recommend that a repository be used to manage and make searchable an organization's business processes, service definitions, and deployed services instances and their interrelationships. That said, incremental definition, development, and deployment of business processes and their supporting services within an enterprise architecture is clearly the way to make progress in moving from the "what is" state to the "what should be" state, as specified by the architecture. Whitehorse gives developers a highly effective toolset to do just that, selecting existing and building new services that are subsequently combined into applications designed to support new and changing business processes.
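
As a minimal sketch of that repository idea, assuming nothing more than an in-memory store (real SDA-management products track far richer metadata and relationships than this):

    # Toy searchable service repository; service names and tags are invented.
    services = {
        "OrderService": {"owner": "fulfillment", "tags": {"orders", "commerce"}},
        "CatalogService": {"owner": "merchandising", "tags": {"catalog", "commerce"}},
    }

    def find_services(tag: str) -> list:
        """Return the names of registered services carrying the given tag."""
        return [name for name, meta in services.items() if tag in meta["tags"]]

    print(find_services("commerce"))  # ['OrderService', 'CatalogService']

Even this trivial lookup illustrates the payoff: before building a new service, a developer can first discover whether an equivalent one already exists.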

A Whitehorse Working Scenario
The remainder of this article will show how Whitehorse can be used to enable application design and deployment to a targeted operational infrastructure in three steps:

  • Application design
  • Operational infrastructure design
  • Application validation against operational infrastructure
Application Design
We have decided that our example application, a typical e-commerce site, will be built using the Microsoft Enterprise Solution Pattern "Three-Layered Services Application" as an architectural guideline. This pattern specifies that applications should be composed of three layers: UI Components, which present business functionality to application users; Service Interfaces, which expose consolidated business functionality (driven perhaps by SRM definitions extracted from our enterprise architecture); and Data Access Components, which encapsulate and present data managed by relational databases. Whitehorse allows us to define our components and services, retrieving and importing existing service definitions where they exist and specifying new ones as needed. We can then use the Whitehorse visual modeling surface to wire our application services and components together, as shown in Figure 1. (Figures in this article were extracted from the Microsoft DSI Overview whitepaper.)
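
Figure 1 itself is not reproduced here, but the layering is easy to sketch. The following toy Python composition, with invented class and method names, shows the shape of the pattern: a UI component calls a service interface, which delegates to a data access component:

    # Toy composition of the pattern's three layers; all names are invented.
    class DataAccessComponent:
        def load_product(self, sku: str) -> dict:
            # Stand-in for a relational-database query.
            return {"sku": sku, "name": "Widget", "unit_price": 9.99}

    class ProductServiceInterface:
        """Exposes consolidated business functionality to callers."""
        def __init__(self, dac: DataAccessComponent):
            self.dac = dac

        def get_product(self, sku: str) -> dict:
            return self.dac.load_product(sku)

    class ProductPage:
        """UI component that presents business functionality to users."""
        def __init__(self, service: ProductServiceInterface):
            self.service = service

        def render(self, sku: str) -> str:
            p = self.service.get_product(sku)
            return f"{p['name']} -- ${p['unit_price']:.2f}"

    page = ProductPage(ProductServiceInterface(DataAccessComponent()))
    print(page.render("SKU-123"))  # Widget -- $9.99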

Operational Infrastructure Design
In parallel with our application design efforts, our IT operations staff, using the principles defined by the Microsoft Enterprise Solution Pattern, "Deployment Plan," defines the operational (i.e., data center) server/network topology against which this application (and all other enterprise applications) must be deployed.

Application Validation
Once we have the data center structure in hand, we can apply our designed application services and components to the operational server/network topology, using Whitehorse's drag-and-drop capabilities. This activity can be described as a four-step process:

  • Import operational topology definitions into our Whitehorse project
  • Drag and drop our application components and services onto server instances defined by the imported operational topology
  • Validate the compatibility between application elements and server instances
  • Reconcile incompatibilities by modifying application component and service requirements and/or data center topology definitions
Some example constraints that might be flagged by Whitehorse (and that we will subsequently need to resolve) include the following; the sketch after this list shows how such checks might be encoded:
  • Only anonymous access will be supported on a specific server, so we can't deploy an application or service that requires user authentication on that server.
  • A server is configured to prevent Web services from running, so we can't deploy a Web service on that server.
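
To make the idea concrete, here is a small Python sketch, with invented server names and policy keys, that encodes the two constraints above and flags a conflicting placement:

    # Design-time validation sketch: servers advertise policies, components
    # declare requirements, and conflicts are reported before deployment.
    servers = {
        "web01": {"authentication": "anonymous-only", "web_services": False},
        "app01": {"authentication": "windows", "web_services": True},
    }

    def validate(component: str, requires: dict, server: str) -> list:
        """Return the constraint violations for placing component on server."""
        policy = servers[server]
        errors = []
        if (requires.get("authentication") == "windows"
                and policy["authentication"] == "anonymous-only"):
            errors.append(f"{component}: {server} allows anonymous access only")
        if requires.get("web_service") and not policy["web_services"]:
            errors.append(f"{component}: {server} does not permit Web services")
        return errors

    needs = {"authentication": "windows", "web_service": True}
    print(validate("OrderService", needs, "web01"))  # two violations
    print(validate("OrderService", needs, "app01"))  # []

Whitehorse performs this kind of check declaratively against SDM definitions rather than in hand-written code, but the inputs and outputs are the same in spirit: requirements, policies, and a list of conflicts to reconcile.
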
Once we have successfully reconciled all incompatibilities, we can continue with our application implementation and deployment with the knowledge that our application is designed to be compatible with our organization's operational infrastructure.

Summary
Whitehorse is a major advance in .NET development tooling, designed both to enable application developers to rapidly define applications and their constituent services and components, and to help development and operations staff communicate systems-oriented requirements and infrastructure dependencies early in the application development life cycle. For more information about Whitehorse and other topics discussed in this article, see the following sites:

  • MSDN TV session on Whitehorse
  • Dynamic Systems Initiative: www.microsoft.com/windowsserversystem/dsi/dsioverview.mspx
  • Enterprise architecture: www.eacommunity.com, www.geao.org
  • Federal Enterprise Architecture: www.feapmo.gov
About the Author

Brent Carlson is vice president of technology and cofounder of LogicLibrary, a provider of software development asset (SDA) management tools. He is the coauthor of two books: San Francisco Design Patterns: Blueprints for Business Software (with James Carey and Tim Graser) and Framework Process Patterns: Lessons Learned Developing Application Frameworks (with James Carey). He also holds 16 software patents, with eight more currently under evaluation.
