Elegant Programming: Managing Functions | Part 3

Keep functions simple

Functions Must Be Short
Create a separate function for each logical sub-task, i.e., divide one long program into a number of short subprograms. This idea is known as "separation of concerns." Do that not only if the code will be re-used (i.e., called from more than one place) but even if it will be called only once. It's not a problem to have a lot of functions belonging to one task or business flow, even dozens - a developer can always bring into focus only one of them. On the other hand, it's very difficult to understand how one intricate, toilet-paper-long script works. Adherence to this rule will produce simple code even if the whole system is extremely complex, like software for a spaceship or for brain surgery. The following tips will help you write code in a simple manner:

  • Ideally a function should be no longer than one screen (not including the header comments). Two screens are still acceptable, but three screens already raise the question of incorrect function organization - unless the function performs long "grunt work" that cannot (or should not) be broken into pieces, for example, processing a large number of fields received from an external service, where each field is processed in a few lines.
  • Another acceptable guideline, which I found in a programming book: "functions should contain up to approximately 100 lines of code, not including comments."

Pay attention: the problem of functions that are too long usually goes hand in hand with the problem of extra indentation already discussed in Part 2.

Let's read what Jorn Olmheim wrote in the book 97 Things Every Programmer Should Know:

There is one quote, from Plato, that I think is particularly good for all software developers to know and keep close to their hearts:

"Beauty of style and harmony and grace and good rhythm depends on simplicity."

In one sentence, this sums up the values that we as software developers should aspire to. There are a number of things we strive for in our code:

  • Readability
  • Maintainability
  • Speed of development
  • The elusive quality of beauty

Plato is telling us that the enabling factor for all of these qualities is simplicity.

I have found that code that resonates with me, and that I consider beautiful, has a number of properties in common. Chief among these is simplicity. I find that no matter how complex the total application or system is, the individual parts have to be kept simple: simple objects with a single responsibility containing similarly simple, focused methods with descriptive names.

The bottom line is that beautiful code is simple code. Each individual part is kept simple with simple responsibilities and simple relationships with the other parts of the system. This is the way we can keep our systems maintainable over time, with clean, simple, testable code, ensuring a high speed of development throughout the lifetime of the system.

Beauty is born of and found in simplicity.

Steve McConnell writes in Code Complete:

A large percentage of routines in object-oriented programs will be accessor routines, which will be very short. From time to time, a complex algorithm will lead to a longer routine, and in those circumstances, the routine should be allowed to grow organically up to 100-200 lines.

Let issues such as the routine's cohesion, depth of nesting, number of variables, number of decision points, number of comments needed to explain the routine, and other complexity-related considerations dictate the length of the routine rather than imposing a length restriction per se.

That said, if you want to write routines longer than about 200 lines, be careful. None of the studies that reported decreased cost, decreased error rates, or both with larger routines distinguished among sizes larger than 200 lines, and you're bound to run into an upper limit of understandability as you pass 200 lines of code.

And now ideas from different developers, found on the Internet:

"When reading code for a single function, you should be able to remember (mostly) what it is doing from beginning to the end. If you get partway through a function and start thinking "what was this function supposed to be doing again?" then that's a sure sign that it's too long..."

***

"Usually if it can't fit on my screen, it's a candidate for refactoring. But, screen size does vary, so I usually look for under 25-30 lines."

***

"IMO you should worry about keeping your methods short and having them do one "thing" equally. I have seen a lot of cases where a method does "one" thing that requires extremely verbose code - generating an XML document, for example, and it's not an excuse for letting the method grow extremely long."

***

"...you should make functions as small as you can make them, as long as they remain discrete sensible operations in your domain. If you break a function ab() up into a() and b() and it would NEVER make sense to call a() without immediately calling b() afterwards, you haven't gained much. Perhaps it's easier to test or refactor, you might argue, but if it really never makes sense to call a() without b(), then the more valuable test is a() followed by b(). Make them as simple and short as they can be, but no simpler!"

***

"As a rule of thumb, I'd say that any method that does not fit on your screen is in dire need of refactoring (you should be able to grasp what a method is doing without having to scroll. Remember that you spend much more time reading code than writing it). ~20 lines is a reasonable maximum, though. Aside from method length, you should watch out for cyclomatic complexity i.e. the number of different paths that can be followed inside the method. Reducing cyclomatic complexity is as important as reducing method length (because a high CC reveals a method that is difficult to test, debug and understand)."

***

"During my first years of study, we had to write methods/functions with no more than 25 lines (and 80 columns max for each line). Nowadays I'm free to code the way I want but I think being forced to code that way was a good thing ... By writing small methods/functions, you more easily divide your code into reusable components, it's easier to test, and it's easier to maintain."

***

"I often end up writing methods with 10 - 30 lines. Sometimes I find longer methods suitable, when it's easier to read/test/maintain."

***

"My problem with big functions is that I want to hold everything that's happening in the code, in my head all at once. That's really the key. It also means that we're talking about a moving target. Because the goal is usability, the one screen rule really does make sense even though you can point to seeming flaws like varying screen resolutions. If you can see it all at once without paging around the editor, you are very much more likely to handle it all as a block.

What if you're working on a team? I suppose the best thing for the team would be to determine the lowest common denominator and target that size. If you have someone with a short attention-span or an IDE set up displaying around 20 lines, that's probably a good target. Another team might be good with 50.

And yeah, whatever your target is, sometimes you'll go over. 40 lines instead of 25 when the function can't really be broken up reasonably is fine here and there. You and your partners will deal with it. But the 3K line single-function VB6 modules that exist in some of my employer's legacy suites are insane!"

***

"I prefer to try to keep everything in a function on the same level of abstraction and as short as possibly. Programming is all about managing complexity and short one purpose functions make it easier to do that."

Keep Functions Simple, Part 2
The problem of mixed levels of abstraction

Don't mix different levels of abstraction in one function. The main function should call well-named sub-functions that call sub-sub-functions (and so on, and so on...), so a developer can easily "travel" up and down between different levels of abstraction (each time concentrating only on the current one) through hierarchies of any depth.
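To make the idea concrete, here is a purely hypothetical PowerScript sketch (all the uf_* names are invented for illustration) of a top-level function in which every line stays at the same level of abstraction, while each called sub-function hides its own lower-level details:

// uf_process_order() - the top level reads like a table of contents:
if not this.uf_validate_order() then return
this.uf_calculate_totals()
this.uf_save_order()
this.uf_print_confirmation()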

Kent Beck wrote in his book Smalltalk Best Practice Patterns: "Divide your program into methods that perform one identifiable task. Keep all of the operations in a method at the same level of abstraction. This will naturally result in programs with many small methods, each a few lines long."

Code Blocks Must Be Short
Don't allow code blocks to grow too large. A code block is a fragment placed between an opening and a closing operator. These operators are:

  • Code branching operators (like IF ... THEN ... ELSE ... END IF).
  • Looping operators (FOR ... NEXT, DO WHILE ... LOOP, DO UNTIL ... LOOP, etc.)

A good rule is to keep the opening and closing operators on one screen. If you see that it's impossible, then think about extracting the block into a new function. That can also decrease the indenting; for example, the fragment:

if [condition] then
   [very long code fragment with its own indents]
end if

will look like this in the new function:

if not [condition] then return
[very long code fragment with its own indents]

But that is the subject of the next section:

Code After Validations
If a large code fragment is executed after a few validations (and so is placed inside a few if-s), move it all (the fragment and the validations) into a new function and exit that function as soon as one of the validations fails. This not only saves your code from extra indenting but also conveys the following information: the whole algorithm (not just a part of it) is executed only after all the preliminary validations have passed successfully. For example, if the first line in a function is "if not <condition> then return" and the code is longer than one screen, readers of the function immediately know that everything that follows is executed only if the condition is satisfied; but if there is an "if" block like "if <condition> then <many-screens-code-fragment> end if", readers are forced to scroll down to see whether there is any code after the "end if" (which is always executed). See how you can convert code from monstrous to elegant:

*** BAD code: ***

if this.uf_data_ok_1() then
   if this.uf_data_ok_2() then
      if this.uf_data_ok_3() then
         [code fragment with its own indents]
      end if
   end if
end if

*** GOOD code (taken into a new function): ***

if not this.uf_data_ok_1() then return
if not this.uf_data_ok_2() then return
if not this.uf_data_ok_3() then return

[code fragment with its own indents]

Here you may ask: what about the "single point of exit" rule? I don't want to discuss it at length here, but this idea produces more problems than it solves. I agree with Dijkstra, who was strongly opposed to the concept of a single point of exit (it can simplify debugging in particular circumstances, but why should I suffer every day from working with more complicated code only for the sake of simplifying possible debugging that may never happen?).

If the code fragment after many validations is not very long and you don't want to extract it into a separate function, use the exception mechanism: put the whole fragment between try and catch, throw an exception when any of the sub-validations fails, and process it locally (without propagating it outwards). If you don't want to use exceptions for any reason, then use one of the following tricks - the "flag method" or the "fake loop method" - but not the "multi-indents method".
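Here is a rough sketch of the exception-based variant, reusing the hypothetical uf_data_ok_* validations from the examples below and assuming that f_throw (the helper used later in this article together with PopulateError) throws a standard Exception:

*** GOOD code ("exceptions method"): ***

try
   if not this.uf_data_ok_1() then f_throw(PopulateError(1, "uf_data_ok_1() failed"))
   if not this.uf_data_ok_2() then f_throw(PopulateError(2, "uf_data_ok_2() failed"))
   if not this.uf_data_ok_3() then f_throw(PopulateError(3, "uf_data_ok_3() failed"))

   [code fragment with its own indenting levels]
catch (Exception le_ex)
   // the failure is processed locally (for example, by showing le_ex.GetMessage())
   // and is NOT re-thrown, so it doesn't propagate outwards
end try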

*** GOOD code ("flag method"): ***

boolean lb_ok

lb_ok = this.uf_data_ok_1()

if lb_ok then
   lb_ok = this.uf_data_ok_2()
end if

if lb_ok then
   lb_ok = this.uf_data_ok_3()
end if

if lb_ok then
   [code fragment with its own indenting levels]
end if

*** Another GOOD code ("fake loop method"): ***

boolean lb_ok

do while true
   lb_ok = false
   if not this.uf_data_ok_1() then exit
   if not this.uf_data_ok_2() then exit
   if not this.uf_data_ok_3() then exit
   lb_ok = true
   exit
loop

if lb_ok then
   [code fragment with its own indenting levels]
end if

As you can see, the fake loop is an eternal loop with an unconditional exit at the end of the first iteration, so a second iteration never happens. This solution looks strange (a loop construction that never loops), but it works.

Return Policy
Functions must return values to the calling script only when it makes sense. A function is allowed to return a value using a return statement only if at least one of the following conditions is true:

  • The main purpose of the function is to obtain the value, and there is only one value to return.
  • The main purpose of the function is to perform some action, but the returned value is important for the calling script (not for error processing - exceptions exist for that), and there is only one value to return. For example, the main purpose of uf_retrieve is to retrieve data from the database, but, in addition, it returns the number of retrieved rows so the calling script is more efficient because it doesn't need to call RowCount() (see the sketch after this list).
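
For illustration, a minimal sketch of such a uf_retrieve (al_customer_id is a hypothetical retrieval argument; the DataWindow Retrieve() function itself returns the number of retrieved rows, or -1 on error):

// public function long uf_retrieve (long al_customer_id):
long ll_rows

ll_rows = dw_main.Retrieve(al_customer_id)
return ll_rows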

A function must not return a value using a return statement (i.e., "(none)" must be selected as the returned type in the function's signature) if at least one of the following conditions is true:

  • The function's purpose is to perform some action, and there is nothing useful to return to the calling script.
  • The function (whatever its purpose) must return more than one value - then all of them (!!!) are returned through "ref" arguments, as in the sketch below. It's considered very bad programming style to mix both mechanisms: a return statement and by-reference arguments!
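
For illustration (uf_parse_full_name and its arguments are invented names): a function that produces two values has "(none)" as its returned type and hands both results back through ref arguments:

string ls_first_name, ls_last_name

// both results come back by reference; nothing is returned via a return statement:
this.uf_parse_full_name("John Smith", ref ls_first_name, ref ls_last_name)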

You may ask: what is the problem with having a meaningless but harmless return 1 at the bottom of the script? Nothing catastrophic, but the return value is part of the function's contract with the outer world, and each detail of that contract is important and must make sense. Looking at the function's interface, developers will draw conclusions about its functionality, so if a value is returned, there must be a reason, and the returned value should somehow be processed in the calling script... You know, it's like adding to a function one extra argument of type int, which is not used inside, and always passing 1 through it. That argument would also be harmless and not catastrophic, but just as unnecessary and foolish as the discussed "return 1".

Use REF Keyword
When you pass actual arguments to a function by reference, always add "ref". In fact, this short keyword plays the role of a comment:

dw_main.GetChild("status", ref ldwc_status)

It really helps in understanding scripts, especially when the called functions have multiple arguments going in both logical directions, "in" and "out". It was a bad decision by the PB creators to make ref an optional keyword; let's make it required of our own free will.
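
To feel the difference, compare a hypothetical call with mixed argument directions (uf_split_amount and its arguments are invented) written without and with the keyword:

this.uf_split_amount(ld_total, ld_net, ld_vat)           // which arguments come back modified? unclear
this.uf_split_amount(ld_total, ref ld_net, ref ld_vat)   // the "out" arguments are obvious at a glance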

No Global Variables and Functions
Never create global variables and functions! They are an atavism that has survived from the early days of programming. Modern technologies (like Java and .NET) don't have these obsolete features at all. PB keeps them only for backward compatibility, so don't create new ones (there is only one exception - global functions used in DataWindow expressions, if other solutions are more problematic).

All developers using the object-oriented approach know about encapsulation, so usually there are no questions about global variables - they are an "obvious evil." But what's so bad about global functions? If you have a small, simple function, making it a public function of an NVO (instead of a global function) seems to provide no advantage at first glance, but...

  • If you want to extract a part of the function's code into a new function (following the principle of creating a subprogram for each logical task, or simply because the script has become too long), it will be impossible without creating yet another global function. And what if you need 20 such functions in the future? You have two bad choices: create additional global functions, or don't (in which case the script is left as a long, unreadable and hardly manageable muddle). But if you have created an NVO as a container for your function (declared "public"), then you can add to that NVO any number of additional "service" functions ("private" if they are not intended to be called from outside).
  • If you need to create a number of different functions related to the same task/flow, putting them in one NVO will not only reduce the number of objects in the PBL, but will also signal that they are somehow related to each other. It's definitely better than a PBL overloaded with a crazy mix of tens or even hundreds of global functions belonging to different logical units.

The programmed process may require you to store data (for example, between calls to the function, or to cache data retrieved once for repeated use by different consumers across the application). If a global function is used, your bad choices are global variables or yet another NVO (in the latter case the related stuff ends up in different locations). But if you have created the function in an NVO, then there are no problems - declare instance variables (as well as constants, for safer and more elegant code).
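As a hypothetical sketch (n_tax_service and uf_calc_tax are invented names), the call site of such an NVO-based function is barely longer than a global function call, while all the related helpers, instance variables and constants live together in one object:

n_tax_service lnv_tax
dec{2} ld_amount = 100
dec{2} ld_tax

lnv_tax = create n_tax_service
ld_tax = lnv_tax.uf_calc_tax(ld_amount)   // public function of the NVO; its private helpers stay hidden inside
destroy lnv_tax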

Refactor Identical Code
Merge functions with duplicated functionality into one generic function. If such a function appears in classes inherited from the same ancestor, create the generic function in the ancestor and call it from the descendants. If the classes are not inherited from one ancestor, then create the generic function in a third class (even if you have to create that class for only one function). If you catch yourself weighing whether to duplicate code with copy-paste (10 minutes of work) or move it to a third place (two hours including testing) - stop weighing immediately. Never even think about it, not even in the last days of your contract - simply move the code to a third place and call it from wherever it is needed. If you are still in doubt about spending your time (which really belongs to the company), ask your manager, but it's better to do the work well and then explain to the manager why it took longer. If the manager understands what quality programming is, your effort will be appreciated.

Refactor Similar Code
Merge functions with similar functionality into one generic function. Different parts of the application must supply specific (uncommon) data to a generic (universal) algorithm implemented only once.

If the functions you are moving to a third place to prevent duplication are very similar but not exactly identical, you need to exercise your brain a little bit more. Do the following:

  • Merge the original scripts as described in "Refactor Identical Code", removing as much code duplication as you can.
  • Supply the differing pieces (unique to each original function) from the application areas where the original functions lived before.

For example, we have two original functions in different classes that look like this (fragments 1 and 2 of the second class are exactly the same as fragments 1 and 2 of the first class, but the assigned entity names are different):

*** BAD code: ***

uf_some_function() of the first class:

[fragment 1]
is_entity = "Car"
[fragment 2]

uf_some_function() of the second class:

[fragment 1]         // exactly like [fragment 1] in the first class
is_entity = "Bike"   // oops, it's different from the same place in the first class...
[fragment 2]         // exactly like [fragment 2] in the first class

*** GOOD code: ***

uf_some_function() moved into the ancestor class:
[fragment 1]
is_entity = this.uf_get_entity()
[fragment 2]

We use the function uf_get_entity to overcome the difference between the two discussed functions. uf_get_entity is created in the ancestor class (as a placeholder returning NULL or an empty string) and implemented in the descendants to supply the specific entity descriptions: in the first descendant the function is coded as return "Car", in the second one as return "Bike".
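A minimal sketch of that arrangement (each body below is the entire script of uf_get_entity in the corresponding class):

// uf_get_entity() in the ancestor class - just a placeholder to be overridden:
return ""

// uf_get_entity() in the first descendant:
return "Car"

// uf_get_entity() in the second descendant:
return "Bike"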

If the function is moved into a third class (one that doesn't belong to the inheritance hierarchy), then the specific (different) data can be supplied as argument(s) of the new merged function, so the fragment "is_entity = this.uf_get_entity()" becomes "is_entity = as_entity", as sketched below.
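
One possible shape of that variant (lnv_helper and the argument name as_entity are hypothetical):

// uf_some_function(string as_entity) in the third (helper) class:
[fragment 1]
is_entity = as_entity
[fragment 2]

// call sites in the two original classes:
lnv_helper.uf_some_function("Car")
lnv_helper.uf_some_function("Bike")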

Finally, there is one more way to achieve the goal - we can populate is_entity while initializing the instance (for example, in its constructor), but this approach is not always applicable.

Of course, it's better to spend some time before development and think about how to organize classes instead of thoughtless straightforward coding that forces the Ctrl+C and Ctrl+V keys to work hard.

Forget the Keyword "DYNAMIC"
Don't call functions and events dynamically. Instead, cast to the needed data type (which has the function/event) and call it statically. Instead of:

int li_wheels_qty
Window lw_transport

lw_transport = uf_get_transport_window()
li_wheels_qty = lw_transport.dynamic wf_get_wheels_qty()

write:

int li_wheels_qty
string ls_win_name
Window lw_transport
w_car lw_car
w_bike lw_bike
w_plane lw_plane

lw_transport = uf_get_transport_window()
ls_win_name = lw_transport.ClassName()
choose case ls_win_name
   case "w_car"
      lw_car = lw_transport
      li_wheels_qty = lw_car.wf_get_wheels_qty()
   case "w_bike"
      lw_bike = lw_transport
      li_wheels_qty = lw_bike.wf_get_wheels_qty()
   case "w_plane"
      lw_plane = lw_transport
      li_wheels_qty = lw_plane.wf_get_wheels_qty()
   case else
      f_throw(PopulateError(0, "Unexpected window " + ls_win_name))
end choose

That approach requires more lines of code, but it has the following advantages:

  • Clarity for code readers. Developers immediately see the whole picture (all the possible situations), whereas dynamic calls hide the picture and require guessing or an annoying investigation (if that information is needed).
  • Type safety. If one day uf_get_transport_window() returns w_boat, a readable message will be generated (saying what the problem is and where it occurred) instead of an application failure. Possibly, the developer will then decide to extend the "choose case" construction with a case "w_boat" branch (which will not call wf_get_wheels_qty() at all - a boat has no wheels), as in the sketch below.
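
One possible way that extra branch could look (purely illustrative):

   case "w_boat"
      li_wheels_qty = 0   // a boat has no wheels, so there is nothing to call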

More Stories By Michael Zuskin

Michael Zuskin is a certified software professional with sophisticated programming skills and experience in Enterprise Software Development.
