Elegant Programming: Managing Functions | Part 3

Keep functions simple

Functions Must Be Short
Create a separate function for each logical sub-task, i.e., divide one long program into a number of short subprograms. The idea is known as "separation of concerns." Do that not only if the code will be re-used (i.e., called from more than one place) but even if it will be called only once. It's not a problem to have many functions belonging to one task or business flow, even dozens - a developer can always bring into focus only one of them. On the other hand, it's very difficult to understand how one intricate, toilet-paper-long script works. Adherence to this rule will produce simple code even if the whole system is extremely complex, like software for a spaceship or for brain surgery. The following tips will help you write code in a simple manner:

  • Ideally a function should be no longer than one screen (not including the header comments). Two screens are still acceptable, but three screens already raise the question of incorrect function organization, unless the function performs long "black work" that cannot (or should not) be broken into pieces - for example, processing a large number of fields received from an external service, where each field is processed in a few lines.
  • Another piece of advice, which I found in a programming book, is also acceptable: "functions should contain up to approximately 100 lines of code, not including comments."

Pay attention: the problem of too-long functions usually goes hand in hand with the problem of extra indentation, already discussed in Part 2.
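As a simple illustration (the function names below are invented for this sketch, not taken from a real application), one long script can be decomposed into a chain of short, well-named functions:

*** BAD code (one long script): ***

// hundreds of lines that load, validate, calculate and print - all in one function

*** GOOD code (one function per logical sub-task): ***

this.uf_load_data()
this.uf_validate_data()
this.uf_calc_totals()
this.uf_print_report()

Each uf_ function now fits on one screen and can be read, tested, and changed in isolation.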

Let's read what Jorn Olmheim wrote in the book 97 Things Every Programmer Should Know:

There is one quote, from Plato, that I think is particularly good for all software developers to know and keep close to their hearts:

"Beauty of style and harmony and grace and good rhythm depends on simplicity."

In one sentence, this sums up the values that we as software developers should aspire to. There are a number of things we strive for in our code:

  • Readability
  • Maintainability
  • Speed of development
  • The elusive quality of beauty

Plato is telling us that the enabling factor for all of these qualities is simplicity.

I have found that code that resonates with me, and that I consider beautiful, has a number of properties in common. Chief among these is simplicity. I find that no matter how complex the total application or system is, the individual parts have to be kept simple: simple objects with a single responsibility containing similarly simple, focused methods with descriptive names.

The bottom line is that beautiful code is simple code. Each individual part is kept simple with simple responsibilities and simple relationships with the other parts of the system. This is the way we can keep our systems maintainable over time, with clean, simple, testable code, ensuring a high speed of development throughout the lifetime of the system.

Beauty is born of and found in simplicity.

Steve McConnell writes in Code Complete:

A large percentage of routines in object-oriented programs will be accessor routines, which will be very short. From time to time, a complex algorithm will lead to a longer routine, and in those circumstances, the routine should be allowed to grow organically up to 100-200 lines.

Let issues such as the routine's cohesion, depth of nesting, number of variables, number of decision points, number of comments needed to explain the routine, and other complexity-related considerations dictate the length of the routine rather than imposing a length restriction per se.

That said, if you want to write routines longer than about 200 lines, be careful. None of the studies that reported decreased cost, decreased error rates, or both with larger routines distinguished among sizes larger than 200 lines, and you're bound to run into an upper limit of understandability as you pass 200 lines of code.

And now ideas from different developers, found on the Internet:

"When reading code for a single function, you should be able to remember (mostly) what it is doing from beginning to the end. If you get partway through a function and start thinking "what was this function supposed to be doing again?" then that's a sure sign that it's too long..."

***

"Usually if it can't fit on my screen, it's a candidate for refactoring. But, screen size does vary, so I usually look for under 25-30 lines."

***

"IMO you should worry about keeping your methods short and having them do one "thing" equally. I have seen a lot of cases where a method does "one" thing that requires extremely verbose code - generating an XML document, for example, and it's not an excuse for letting the method grow extremely long."

***

"...you should make functions as small as you can make them, as long as they remain discrete sensible operations in your domain. If you break a function ab() up into a() and b() and it would NEVER make sense to call a() without immediately calling b() afterwards, you haven't gained much. Perhaps it's easier to test or refactor, you might argue, but if it really never makes sense to call a() without b(), then the more valuable test is a() followed by b(). Make them as simple and short as they can be, but no simpler!"

***

"As a rule of thumb, I'd say that any method that does not fit on your screen is in dire need of refactoring (you should be able to grasp what a method is doing without having to scroll. Remember that you spend much more time reading code than writing it). ~20 lines is a reasonable maximum, though. Aside from method length, you should watch out for cyclomatic complexity i.e. the number of different paths that can be followed inside the method. Reducing cyclomatic complexity is as important as reducing method length (because a high CC reveals a method that is difficult to test, debug and understand)."

***

"During my first years of study, we had to write methods/functions with no more than 25 lines (and 80 columns max for each line). Nowadays I'm free to code the way I want but I think being forced to code that way was a good thing ... By writing small methods/functions, you more easily divide your code into reusable components, it's easier to test, and it's easier to maintain."

***

"I often end up writing methods with 10 - 30 lines. Sometimes I find longer methods suitable, when it's easier to read/test/maintain."

***

"My problem with big functions is that I want to hold everything that's happening in the code, in my head all at once. That's really the key. It also means that we're talking about a moving target. Because the goal is usability, the one screen rule really does make sense even though you can point to seeming flaws like varying screen resolutions. If you can see it all at once without paging around the editor, you are very much more likely to handle it all as a block.

What if you're working on a team? I suppose the best thing for the team would be to determine the lowest common denominator and target that size. If you have someone with a short attention-span or an IDE set up displaying around 20 lines, that's probably a good target. Another team might be good with 50.

And yeah, whatever your target is, sometimes you'll go over. 40 lines instead of 25 when the function can't really be broken up reasonably is fine here and there. You and your partners will deal with it. But the 3K line single-function VB6 modules that exist in some of my employer's legacy suites are insane!"

***

"I prefer to try to keep everything in a function on the same level of abstraction and as short as possibly. Programming is all about managing complexity and short one purpose functions make it easier to do that."

Keep Functions Simple, Part 2
The problem of mixed levels of abstraction

Don't mix different levels of abstraction in one function. The main function should call well-named sub-functions that call sub-sub-functions (and so on, and so on...), so a developer can easily "travel" up and down between different levels of abstraction (each time concentrating only on the current one) through hierarchies of any depth.

Kent Beck wrote in his book Smalltalk Best Practice Patterns: "Divide your program into methods that perform one identifiable task. Keep all of the operations in a method at the same level of abstraction. This will naturally result in programs with many small methods, each a few lines long."
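For illustration (again with invented function names), compare a script that mixes high-level flow with low-level details against one that keeps each function on a single level of abstraction:

*** BAD code (mixed levels of abstraction): ***

this.uf_load_customer()
is_cust_name = Trim(Upper(is_raw_name)) // low-level string polishing among high-level calls
this.uf_print_report()

*** GOOD code (one level of abstraction per function): ***

this.uf_load_customer()
this.uf_polish_cust_name() // the Trim/Upper details now live one level down
this.uf_print_report()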

Code Blocks Must Be Short
Don't let code blocks grow too long. A code block is a fragment placed between opening and closing operators. These operators are:

  • Code branching operators (like IF ... ELSE ... END IF).
  • Looping operators (FOR ... NEXT, LOOP ... END LOOP, DO ... WHILE etc.)

The key idea is to keep the opening and closing operators on one screen. If you see that it's impossible, think about extracting the block into a new function. That can also decrease the indenting; for example, the fragment:

if [condition] then
   [very long code fragment with its own indents]
end if

will look in the new function like:

if not [condition] then return
[very long code fragment with its own indents]

But that is the subject of the next paragraph:

Code After Validations
If a large code fragment is executed after a few validations (and is therefore placed inside a few if-s), move it all (the fragment and the validations) into a new function and exit that function as soon as one of the validations fails. That will not only save your code from extra indenting but also convey the following information: the whole algorithm (not just a part of it) is executed only after all the preliminary validations have passed successfully. For example, if the first line in a function is "if not <condition> then return" and the code is longer than one screen, readers immediately know that everything that follows is done only if the condition is satisfied; but if there is an "if" block like "if <condition> then <many-screens-code-fragment> end if", readers are forced to scroll down to see whether there is any code after the "end if" (executed always). See how you can convert code from monstrous to elegant:

*** BAD code: ***

if this.uf_data_ok_1() then
   if this.uf_data_ok_2() then
      if this.uf_data_ok_3() then
         [code fragment with its own indents]
      end if
   end if
end if

*** GOOD code (taken into a new function): ***

if not this.uf_data_ok_1() then return
if not this.uf_data_ok_2() then return
if not this.uf_data_ok_3() then return

[code fragment with its own indents, now without the extra nesting]

Here you can ask: and what about the "single point of exit" rule? I don't want to discuss it at length here, but in my opinion this rule produces more problems than it solves (it can simplify debugging in particular circumstances, but why should I suffer every day from more complicated code only for the sake of simplifying a debugging session that may never happen?).

If the code fragment after many validations is not very long and you don't want to extract it into a separate function, use the exceptions mechanism: put the whole fragment between try and catch, throw an exception when any of the sub-validations fails, and process it locally (without propagating it outwards). If you don't want to use exceptions for any reason, then use one of the following tricks - the "flag method" or the "fake loop method" - but not the "multi-indents method".
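Here is a minimal sketch of the exceptions variant (it assumes that f_throw, which appears later in this article, wraps PowerScript's throw statement and throws an Exception; adjust the catch type to whatever f_throw actually throws):

*** GOOD code ("local exception method"): ***

try
   if not this.uf_data_ok_1() then f_throw(PopulateError(1, "validation 1 failed"))
   if not this.uf_data_ok_2() then f_throw(PopulateError(2, "validation 2 failed"))
   if not this.uf_data_ok_3() then f_throw(PopulateError(3, "validation 3 failed"))
   [code fragment with its own indenting levels]
catch (Exception le_ex)
   // the failure is processed locally and not propagated outwards
end try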

*** GOOD code ("flag method"): ***

boolean lb_ok

lb_ok = this.uf_data_ok_1()

if lb_ok then
   lb_ok = this.uf_data_ok_2()
end if

if lb_ok then
   lb_ok = this.uf_data_ok_3()
end if

if lb_ok then
   [code fragment with its own indenting levels]
end if

*** Another GOOD code ("fake loop method"): ***

boolean lb_ok

do while true
   lb_ok = false
   if not this.uf_data_ok_1() then exit
   if not this.uf_data_ok_2() then exit
   if not this.uf_data_ok_3() then exit
   lb_ok = true
   exit
loop

if lb_ok then
   [code fragment with its own indenting levels]
end if

As you can see, the fake loop is an eternal loop with an unconditional exit at the end of the first iteration, so a second iteration never happens. This solution looks strange (a loop construction that never loops), but it works.

Return Policy
Functions must return values to the calling script only when it makes sense. A function is allowed to return a value using a return statement only if at least one of the following conditions is true:

  • The main purpose of the function is to obtain the value, and there is only one value to return.
  • The main purpose of the function is to perform some action, but the returned value is important to the calling script (not for error processing - exceptions exist for that), and there is only one value to return. For example, the main purpose of uf_retrieve is to retrieve data from the database, but, in addition, it returns the number of retrieved rows, so the calling script is more efficient because it doesn't need to call RowCount().

A function must not return a value using a return statement (i.e., "(none)" must be selected as the returned type in the function's signature) if at least one of the following conditions is true:

  • The function's purpose is to perform some action, and there is nothing useful to return to the calling script.
  • The function (of any purpose) must return more than one value - then all of them (!) are returned through "ref" arguments (a sketch follows this list). It's considered very bad programming style to mix the two mechanisms - a return statement and by-reference arguments!
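A minimal sketch (the function uf_get_totals and its arguments are hypothetical) of a function that hands back two values: it is declared with the returned type "(none)", and both values travel out through ref arguments:

// signature: uf_get_totals (ref long al_rows, ref dec ad_amount) returns (none)

long ll_rows
dec  ld_amount

this.uf_get_totals(ref ll_rows, ref ld_amount)
// both results arrive through the ref arguments; nothing is returned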

You can ask: what is the problem with a meaningless but harmless return 1 at the bottom of the script? Nothing catastrophic, but the return value is part of the function's contract with the outer world, and each detail of that contract is important and must make sense. Looking at the function's interface, developers will draw conclusions about its functionality, so if a value is returned, there must be a reason, and the returned value should somehow be processed in the calling script... You know, it's like adding to a function one extra argument of type int that is not used inside, and always passing 1 through it. That argument would also be harmless and not catastrophic, but unnecessary and foolish in the same way as the discussed "return 1".

Use REF Keyword
When you pass actual arguments to a function by reference, always add "ref". In fact, this short keyword plays the role of a comment:

dw_main.GetChild("status", ref ldwc_status)

It really helps in understanding scripts, especially when the called functions have multiple arguments in both logical directions, "in" and "out". It was a bad decision by PB's creators to make ref an optional keyword; let's make it required of our own free will.
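For example (the call below is invented for illustration), in a function with arguments in both directions, ref instantly marks the "out" ones:

// ll_count goes in; ld_avg and ld_max come out:
this.uf_calc_stats(ll_count, ref ld_avg, ref ld_max)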

No Global Variables and Functions
Never create global variables and functions! They are an atavism that has survived from the early days of programming. Modern technologies (like Java and .NET) don't have these obsolete features at all. PB keeps them only for backward compatibility, so don't create new ones (there is only one exception - global functions used in DataWindow expressions, when other solutions are more problematic).

All developers using the object-oriented approach know about encapsulation, so usually there are no questions about global variables - they are an "obvious evil." But what's so bad about global functions? If you have a small, simple function, making it a public function of an NVO (instead of a global function) seems to provide no advantage at first glance, but...

  • If you want to extract a part of the function's code into a new function (following the principle of creating a subprogram for each logical task, or simply because the script has become too long), it will be impossible without creating another global function. And what if you need 20 such functions in the future? You have two bad choices: to create additional global functions, or not to create them (in the latter case the script will be left as a long, unreadable, and hardly manageable muddle). But if you have created an NVO as a container for your function (declared "public"), then you can add to that NVO any number of additional "service" functions ("private" if they are not intended to be called from outside).
  • If you need to create a number of different functions related to the same task/flow, putting them in one NVO will not only decrease the number of objects in the PBL, but will also signal that they are somehow related to each other. It's definitely better than a PBL overloaded with a crazy mix of tens or even hundreds of global functions belonging to different logical units.

The programmed process may require you to store data (for example, between calls to the function, or to cache data retrieved once for repeated use by different consumers across the application). If a global function is used, your bad choices are global variables or yet another NVO (in the latter case you will have related stuff in different locations). But if you have created the function in an NVO, then there is no problem - declare instance variables (as well as constants, for safer and more elegant code).
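As a sketch (the object and function names are invented), an NVO that replaces a global function can cache data in an instance variable and hide its helpers:

// NVO n_currency_helper - instance variable:
dec id_cached_rate // cached between calls; a global function would need a global variable for this

// public function uf_convert (dec ad_amount) returns dec:
if id_cached_rate = 0 then
   id_cached_rate = this.uf_load_rate() // private "service" helper, invisible from outside
end if
return ad_amount * id_cached_rate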

Refactor Identical Code
Merge functions with duplicated functionality into one generic function. If such a function appears in classes inherited from the same ancestor, create the generic function in the ancestor and call it from the descendants. If the classes are not inherited from one ancestor, then create the generic function in a third class (even if you have to create that class for only one function). If you catch yourself weighing whether to duplicate code with copy-paste (10 minutes of work) or move it to a third place (two hours including testing) - stop weighing immediately. Never think about that, even in the last days of your contract - simply move the code to a third place and call it from wherever it is needed. If you are still in doubt about spending your time (which really belongs to the company), ask your manager, but it's better to do the work well and afterwards explain to the manager why it took longer. If the manager understands what quality programming is, your effort will be appreciated.

Refactor Similar Code
Merge functions with similar functionality into one generic function. Different parts of the application must supply specific (uncommon) data to a generic (universal) algorithm implemented only once.

If the functions you are moving to a third place to prevent duplication are very similar but not exactly identical, you will have to exercise your brain a little more. Do the following:

  • Merge the original scripts as described in "Refactor Identical Code", removing as much code duplication as you can.
  • Supply the differing stuff (unique to each original function) from the application areas where the original functions lived before.

For example, we have two original functions in different classes like these (fragments 1 and 2 of the second class are exactly the same as fragments 1 and 2, respectively, of the first class, but the entity names are different):

*** BAD code: ***

uf_some_function() of the first class:

[fragment 1]
is_entity = "Car"
[fragment 2]

uf_some_function() of the second class:

[fragment 1] // exactly like [fragment 1] in the first class
is_entity = "Bike" // oops, it's different from the same place in the first class...
[fragment 2] // exactly like [fragment 2] in the first class

*** GOOD code: ***

uf_some_function() moved into the ancestor class:

[fragment 1]
is_entity = this.uf_get_entity()
[fragment 2]

We use the function uf_get_entity to overcome the difference between the two discussed functions. uf_get_entity is created in the ancestor class (as a placeholder returning NULL or an empty string) and implemented in the descendants to supply the specific entity descriptions: in the first descendant the function is coded as return "Car", in the second one as return "Bike".
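In code, the whole mechanism takes just a few one-liners (shown here as a sketch):

// in the ancestor - a placeholder:
// public function uf_get_entity () returns string:
return ""

// in the first descendant - the override:
return "Car"

// in the second descendant - the override:
return "Bike"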

If the function is taken into a third class (that doesn't belong to the inheritance hierarchy) then the specific (different) data can be supplied as argument(s) of the new merged function, so the fragment "is_entity = this.uf_get_entity()" will become "is_entity = as_entity".

Finally, there is one more method of achieving the goal - we can populate is_entity while initializing the instance (for example, in its constructor), but this approach is not always applicable.
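A sketch of that variant: each descendant assigns the value in its constructor event, and the merged ancestor code simply reads is_entity:

// constructor event of the first descendant:
is_entity = "Car"

// constructor event of the second descendant:
is_entity = "Bike"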

Of course, it's better to spend some time before development thinking about how to organize the classes than to code thoughtlessly and straightforwardly, forcing the Ctrl+C and Ctrl+V keys to work hard.

Forget the Keyword "DYNAMIC"
Don't call functions and events dynamically. Instead, cast to the needed data type (which has the function/event) and call it statically. Instead of:

int li_wheels_qty
Window lw_transport

lw_transport = uf_get_transport_window()
li_wheels_qty = lw_transport.dynamic wf_get_wheels_qty()

write:

int li_wheels_qty
string ls_win_name
Window lw_transport
w_car lw_car
w_bike lw_bike
w_plane lw_plane

lw_transport = uf_get_transport_window()
ls_win_name = lw_transport.ClassName()

choose case ls_win_name
case "w_car"
   lw_car = lw_transport
   li_wheels_qty = lw_car.wf_get_wheels_qty()
case "w_bike"
   lw_bike = lw_transport
   li_wheels_qty = lw_bike.wf_get_wheels_qty()
case "w_plane"
   lw_plane = lw_transport
   li_wheels_qty = lw_plane.wf_get_wheels_qty()
case else
   f_throw(PopulateError(0, "Unexpected window " + ls_win_name))
end choose

That approach requires more lines of code, but it has the following advantages:

  • Clarity for code readers. Developers immediately see the whole picture (all the possible situations); conversely, dynamic calls hide the picture and require guessing or an annoying investigation (if that information is needed).
  • Type safety. If one day uf_get_transport_window() returns w_boat, a readable message will be generated (saying what the problem is and where it occurred) instead of an application failure. The developer can then decide to extend the "choose case" construction with case "w_boat" (which, presumably, will not call wf_get_wheels_qty()).

More Stories By Michael Zuskin

Michael Zuskin is a certified software professional with sophisticated programming skills and experience in Enterprise Software Development.
