Crunchbase Data Mashed Into Microsoft Pivot

About two weeks ago I had the good fortune to spend some time at an offsite where I met Gary Flake.  I remember reading the Wired Magazine cover piece on Gary a few years back, but didn’t have any idea who he was when I was introduced to him at the offsite.  As one of Microsoft’s Technical Fellows, he’s basically one of the 20 or so smartest engineers in the company.  Spending time with a guy like that is a treat, and this guy thinks about stuff that gets me excited.  Data and systems.

It’s a good thing Gary is so good at his job, because when he gave me the initial pitch for Pivot I thought it sounded about as interesting as a new sorting algorithm [NOTE: the downloads are restricted to token holders, so if you are interested in getting Pivot, hit me up on Twitter and I will get you one].  It wasn’t a great pitch.  Only after I saw the software in action, and lifted my jaw off the floor, did I run back over to Gary and offer to rewrite his 25-word pitch.  My motives were not altogether altruistic.  I wanted access to the software, but more importantly I wanted access to the tools to create my own data sets.

My unofficial, not-blessed-by-Microsoft way of describing Pivot is: a client application for exploring user-created data sets along multiple criteria in a rich, visual way.  In short, it’s Pivot Tables + Crack + WPF.  The demo data sets Gary was showing were interesting, but nothing about the data was actionable.  It was informational, but not insight-generating.  My brain jumped to dumping CRM data into Pivot…or a bug database…or a customer evidence set.  Things that were actionable, traditionally hard to search, and would benefit from a visual metaphor.  Then, like a ton of bricks, it hit me.  What about Crunchbase?

Spend a few minutes wandering around Crunchbase and you realize what an incredibly rich dataset they have assembled, and yet the search and browse interface could be better.  It’s rather simplistic, and it’s not possible to dive deeper into a search to refine it.  So that was my project: I was going to use the Crunchbase API to generate a data set for Pivot.  Sounded simple enough.  Here’s how I did it, and the result.  (Here’s a link to the CXML for those of you with the Pivot browser who want to see mine in action – WARNING: it takes about 20 seconds to load.)

The Code

I have created a CodePlex project for the CrunchBase Grabber, and welcome any additions to the project.

The first problem I had to solve was how to pull the JSON objects down and use them in C#.  I normally would have done something like this in Python and used the BeautifulSoup library, but I really wanted to do a soup-to-nuts C# project and walk a mile in my customers’ shoes.  It turns out that we have a pretty good object for doing just this.  In the System.Web.Script.Serialization namespace (for which you have to add a reference to the System.Web.Extensions assembly) there is a nice JavaScriptSerializer object.  This was nice to use, but the information on the web was a bit confusing.  It appears that this was deprecated in .NET 3.0, and then brought back in 3.5.  It’s back and it works.

What I liked about the JavaScriptSerializer was that it could take an arbitrary JSON object in as a stream, and then deserialize to an object of my creation.  I only needed to include the fields that I wanted from the object, so long as the names mapped to the items in the JSON object.  That made creating a custom class for just the data I wanted much easier than enumerating all of the possible data types.

    //the custom class for just the data I wanted; field names must match the keys in the JSON
    public class CrunchBase
    {
        public string name;
        public string permaLink;
        public string homepage_url;
        public string crunchbase_url;
        public string category_code;
        public string description; // = "";
        public int? number_of_employees; // = 0;
        public string overview;
        public bool deadpool;
        public int? deadpool_year; //= "";
        public imgStructure image;
        public List<locStructure> offices;
        public string tag_list;
        public int? founded_year;
        public List<fndStructure> funding_rounds;
        public Dictionary<string, fndStructure> aggFundStructure = new Dictionary<string,fndStructure>(); //built later by post processing, not deserialized
        public List<string> keyword_tags;
    }

There are a couple of things I want to share that will make life a lot easier for you if you plan on using this JavaScriptSerializer.  First, know how to make a type nullable.  If you don’t know what that means, here’s the short version: for any type other than a string, add that “?” after the type, and that will allow you to assign null to it.  Why is this important?  During the deserialization process, you are bound to hit null values in the stream.  This is especially true if you aren’t in control of the stream, as I wasn’t with Crunchbase.  That’s 4 hours of frustration from my life I just saved you.  I left my comments in the code above to show that I tried all kinds of things to solve this “assignment of null” exception, and none of them worked.  Just use the “?”.
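
To make the difference concrete, here is a minimal sketch (the Company class and the JSON string are made up purely for illustration): with a plain int the deserializer throws the moment it hits a null, while the nullable version just comes through as null.

    using System;
    using System.Web.Script.Serialization;

    public class Company
    {
        public string name;
        public int? founded_year; //declare this as a plain "int" and the Deserialize call below throws
    }

    public class NullableDemo
    {
        public static void Main()
        {
            JavaScriptSerializer ser = new JavaScriptSerializer();
            Company c = ser.Deserialize<Company>("{\"name\":\"SomeStartup\",\"founded_year\":null}");
            Console.WriteLine(c.founded_year.HasValue); //False - the null came through cleanly
        }
    }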

Second is understanding the created data types that were used.  Most JSON objects will have nested data structures.  When that happens, you will need to create a new data type with the same name as the data coming back from the object.  In this example, let’s look at the image data:

    public class imgStructure
    {
        public List<List<object>> available_sizes; //each entry is [[width, height], "relative/path.png"]
        public string attribution;
    }

The available_sizes field actually comes back with a set of sizes and a relative file location.  Because there are both numbers and text in there, a List of type object had to be used.  That’s another 3 hours I just saved you.  Here’s the JSON that came back so you can see what I mean:

 "image":
  {"available_sizes":
    [[[150,
       41],
      "assets/images/resized/0000/2755/2755v28-max-150x150.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-250x250.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-450x450.png"]],
   "attribution": null},

Getting at that data would prove difficult.

return baseURL + this.image.available_sizes[1][1].ToString();

 

Because I wanted the middle-sized logo, and its location, I used the [1][1] to get the string.  Had I wanted the sizes, I would have needed [1][0][0] or [1][0][1], because [1][0] is itself an array holding the dimensions.  Yes, it’s confusing and annoying, but if you know what you want, navigating the complex nested data type can be done.
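
If it helps, here is a hand-built miniature of the same nesting (the values are made up, and this assumes System.Collections.Generic is in scope) showing why [1][1] is the path and [1][0] is the size pair:

        //hand-built stand-in for image.available_sizes, just to show the shape of the nesting
        List<List<object>> availableSizes = new List<List<object>>
        {
            new List<object> { new object[] { 150, 41 }, "assets/images/small.png"  },
            new List<object> { new object[] { 220, 61 }, "assets/images/medium.png" },
        };

        string mediumPath = availableSizes[1][1].ToString();  //the relative file location
        object[] mediumSize = (object[])availableSizes[1][0]; //the [width, height] pair
        //mediumSize[0] is the width and mediumSize[1] is the height; note that the serializer
        //may materialize that inner array differently, so check the runtime type before casting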

There were actually two JSON streams I needed to parse.  The first was the company list, which I retrieved by creating a CompanyGenerator class that makes the WebRequest to the API, pulls down the company list JSON, and then parses that list into a list of company objects.

    using System.Collections.Generic;
    using System.IO;
    using System.Net;
    using System.Web.Script.Serialization;

    public class CompanyGenerator
    {
        //this is how we call out to crunchbase to get their full list of companies
        public List<cbCompanyObject> GetCompanyNames()
        {
            string jsonStream;
            JavaScriptSerializer ser = new JavaScriptSerializer();

            WebRequest wrGetURL = WebRequest.Create("http://api.crunchbase.com/v/1/companies.js");

            //read the whole response body as a single JSON string
            using (StreamReader reader = new StreamReader(wrGetURL.GetResponse().GetResponseStream()))
            {
                jsonStream = reader.ReadToEnd();
            }

            //as opposed to the single company calls, this returns a list of companies, so we have to
            //stick it into a list
            List<cbCompanyObject> jsonCompanies = ser.Deserialize<List<cbCompanyObject>>(jsonStream);

            return jsonCompanies;
        }
    }

Once I had that list, it was a simple matter of iterating over the list and fetching the individual JSON objects per company.

            foreach (cbCompanyObject company in companyNames)
            {
                string jsonStream;

                //with a company name parsed from JSON, create the stream of the company specific JSON
                jsonStream = cjStream.GetJsonStream(company.name);

                if (jsonStream != null)
                {
                    try
                    {
                        //with the stream, now deserialize into the Crunchbase object
                        CrunchBase jsonCrunchBase = ser.Deserialize<CrunchBase>(jsonStream);

                        //assuming that worked, we need to clean up and create some additional meta data
                        jsonCrunchBase.FixCrunchBaseURL();
                        jsonCrunchBase.AggregateFunding();
                        jsonCrunchBase.SplitTagString();

 

The FixCrunchBaseURL(), AggregateFunding() and SplitTagString() calls above are post-processing functions meant to get more specific data for my needs.  AggregateFunding() was a really good time, and it is left as an exercise for the reader should you want to enjoy the fun of parsing an arbitrary number of nested funding events, assigning each one to the right round type, and summing the total funding per round.
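
The post doesn’t include AggregateFunding() itself, so here is only a rough sketch of the general idea, assuming fndStructure exposes a round_code (e.g. "angel", "a", "b") and a nullable decimal raised_amount; both names are my guesses at the shape, not taken from the CodePlex project.

        //hedged sketch only, not the project's implementation; the field names are assumptions
        public Dictionary<string, decimal> TotalFundingByRound(List<fndStructure> rounds)
        {
            Dictionary<string, decimal> totals = new Dictionary<string, decimal>();
            if (rounds == null) return totals;

            foreach (fndStructure round in rounds)
            {
                if (round == null || string.IsNullOrEmpty(round.round_code)) continue;

                decimal amount = round.raised_amount ?? 0m; //user-entered data, so expect nulls
                if (totals.ContainsKey(round.round_code))
                    totals[round.round_code] += amount;     //sum multiple events within the same round
                else
                    totals[round.round_code] = amount;
            }
            return totals;
        }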

Since the data is all user-generated, and there’s no guarantee that it is reliable, I had to trap the exception thrown when a company URL simply doesn’t exist:

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create(apiUrlBase + companyName + urlEnd);

            try
            {
                jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();
                return jsonStream;
            }
            catch (System.Net.WebException)
            {
                Console.WriteLine("Company: {0} - URL bad", companyName);
            }

            //no JSON came back for this company; callers check for null before deserializing
            return null;

I thought it strange that the company list would return permalinks to companies that are no longer listed in Crunchbase or no longer have a JSON dataset, but as long as you trap the exception, things are fine.  Once the data came back and I had put it into the object, I could selectively dump data to a text file.
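
The dump itself is just a loop over the deserialized objects.  A minimal sketch of what that can look like (the file name, the crunchBaseObjects variable, and the exact field list are illustrative, not lifted from the project):

            //minimal sketch of dumping selected fields as tab-delimited text for the Excel add-in
            using (StreamWriter writer = new StreamWriter("companies.txt"))
            {
                writer.WriteLine("Name\tCategory\tFounded\tEmployees\tHomepage");
                foreach (CrunchBase company in crunchBaseObjects)
                {
                    writer.WriteLine("{0}\t{1}\t{2}\t{3}\t{4}",
                        company.name, company.category_code, company.founded_year,
                        company.number_of_employees, company.homepage_url);
                }
            }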

So that’s a simple walkthrough of how my code accessed the CrunchBase API in preparation for creating my Pivot data set.  Again, I have created a CodePlex project for the CrunchBase Grabber and welcome additions.

Data Set Creation

Knowing what I knew about how the Excel add-in worked, I created my text file with well-defined delimiters and column headings.  I couldn’t sort out how to import the HTML returned in the JSON for the company overview without having Excel puke on the import.  That’s a nice-to-have that I will get to at a later time.

It turns out that using the tool to create the columns is less error-prone than simply trying to insert them yourself.

By creating the columns ahead of time, I could simply copy and paste from my imported tab-delimited file into my Pivot collection.  Here’s another tip: if you have a lot of image locations that are sitting on a server offsite (as in, on the Crunchbase servers), save that copy and paste for last.  As soon as the URLs are in the Pivot data set XLS, the Pivot add-in will try to go fetch all of the images, which can take some time.

I processed my text file down from about 15K good entries to about 4K.  The first criterion was that the company had to have a logo.  Second, it had to have funding, a country, a founding year, and a category listed.  I had been given the heads-up that anything more than about 5K objects in a single CXML file would be problematic.
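
For what it’s worth, that kind of filtering can be expressed as a single LINQ query over the deserialized objects.  This is a sketch of the criteria rather than the exact filtering code, it assumes System.Linq is in scope, allCompanies stands in for the deserialized list, and the locStructure country_code field name is a guess:

            //sketch of the filter criteria; locStructure.country_code is an assumed field name
            List<CrunchBase> keepers = allCompanies
                .Where(c => c.image != null                                            //must have a logo
                         && c.funding_rounds != null && c.funding_rounds.Count > 0     //must have funding
                         && c.founded_year != null                                     //must have a founding year
                         && !string.IsNullOrEmpty(c.category_code)                     //must have a category
                         && c.offices != null
                         && c.offices.Any(o => !string.IsNullOrEmpty(o.country_code))) //must have a country
                .ToList();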

I also wanted to ensure that some of the fields were not used for filtering but still showed up in the information panel.  Luckily, the tool made this pretty simple.  By moving the cursor to the desired column, you can tick the check boxes to change where the data will appear and how it can be used by the user.  This is a nice touch of the Excel add-in tool.

Once the data was all in, I clicked the “Publish Collection” button and wandered off for an age or two.  It took, erm, a little bit of time, even on my jacked up laptop, to process the collection and create the CXML file.  If you have access to the Pivot app, you can point your browser at this URL to see the final result.  For those of you who don’t have access to the Pivot Browser, I have included a few screen caps to show what the resulting dataset looked like.

[Screenshot: the full data set of roughly 4,000 companies rendered in Pivot, with the pivot criteria on the left]

The first shot is what the full data set renders to in the window.  That’s all 4,000 companies, and the Pivot criteria are on the left.  The really cool thing about Pivot is the way you can explore a data set.  Start with the full set of companies, and pivot on the web companies.  Refine that to only companies in CA and WA.  Decide that you want companies funded between 2004 and 2006, and only those that raised between $2 million and $5 million.  You can do that, in real time, and all the data reorganizes itself.  Then you can click on a company logo and get additional information.  Another example screen cap:

[Screenshot: a filtered view of the data set after pivoting on several criteria]

All of the filtering happens in real time, and utilizes the DeepZoom technology.  When you change your query criteria, any additional data is fetched via simple HTTP requests, and it’s all quite fast.  For those of you with the Pivot app, you can see how quickly this exploration renders once you have loaded the CXML.

For my Pivot data set, the fields I opted to allow the search to pivot on were: company category, number of employees, city, state, country, year funded, total funding, and keyword tags.  It makes for some good data deep dives.  I want my next iteration to have funding companies as a pivot point as well.  It would be nice to see which investors are in bed together the most.

Put simply, I am stunned by this technology.  I have barely scratched the surface of what is possible with building data sets for Pivot.  I plan to spend quite a bit of my free time in the next few weeks playing with this and thinking about additional data sources to plug into this.  I love that we are building such cool stuff at our company, and I love how accessible it was to an inquisitive mind.  I cannot wait to see what other data sets get created.


More Stories By Brandon Watson

Brandon Watson is Director for Windows Phone 7. He specifically focuses on developers and the developer platform. He rejoined Microsoft in 2008 after nearly a decade on Wall Street and running successful start-ups. He has both an engineering degree and an economics degree from the University of Pennsylvania, as well as an MBA from The Wharton School of Business, and blogs at www.manyniches.com.

@ThingsExpo Stories
Business professionals no longer wonder if they'll migrate to the cloud; it's now a matter of when. The cloud environment has proved to be a major force in transitioning to an agile business model that enables quick decisions and fast implementation that solidify customer relationships. And when the cloud is combined with the power of cognitive computing, it drives innovation and transformation that achieves astounding competitive advantage.
The Founder of NostaLab and a member of the Google Health Advisory Board, John is a unique combination of strategic thinker, marketer and entrepreneur. His career was built on the "science of advertising" combining strategy, creativity and marketing for industry-leading results. Combined with his ability to communicate complicated scientific concepts in a way that consumers and scientists alike can appreciate, John is a sought-after speaker for conferences on the forefront of healthcare science,...
Data is the fuel that drives the machine learning algorithmic engines and ultimately provides the business value. In his session at Cloud Expo, Ed Featherston, a director and senior enterprise architect at Collaborative Consulting, discussed the key considerations around quality, volume, timeliness, and pedigree that must be dealt with in order to properly fuel that engine.
Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy. How should your organization enhance its IT framework to enable an Internet of Things implementation? In his session at @ThingsExpo, James Kirkland, Red Hat's Chief Archi...
The current age of digital transformation means that IT organizations must adapt their toolset to cover all digital experiences, beyond just the end users’. Today’s businesses can no longer focus solely on the digital interactions they manage with employees or customers; they must now contend with non-traditional factors. Whether it's the power of brand to make or break a company, the need to monitor across all locations 24/7, or the ability to proactively resolve issues, companies must adapt to...
In his keynote at 18th Cloud Expo, Andrew Keys, Co-Founder of ConsenSys Enterprise, provided an overview of the evolution of the Internet and the Database and the future of their combination – the Blockchain. Andrew Keys is Co-Founder of ConsenSys Enterprise. He comes to ConsenSys Enterprise with capital markets, technology and entrepreneurial experience. Previously, he worked for UBS investment bank in equities analysis. Later, he was responsible for the creation and distribution of life settl...
Organizations planning enterprise data center consolidation and modernization projects are faced with a challenging, costly reality. Requirements to deploy modern, cloud-native applications simultaneously with traditional client/server applications are almost impossible to achieve with hardware-centric enterprise infrastructure. Compute and network infrastructure are fast moving down a software-defined path, but storage has been a laggard. Until now.
In his general session at 19th Cloud Expo, Manish Dixit, VP of Product and Engineering at Dice, discussed how Dice leverages data insights and tools to help both tech professionals and recruiters better understand how skills relate to each other and which skills are in high demand using interactive visualizations and salary indicator tools to maximize earning potential. Manish Dixit is VP of Product and Engineering at Dice. As the leader of the Product, Engineering and Data Sciences team at D...
DXWorldEXPO LLC announced today that the upcoming DXWorldEXPO | CloudEXPO New York event will feature 10 companies from Poland to participate at the "Poland Digital Transformation Pavilion" on November 12-13, 2018.
Digital Transformation is much more than a buzzword. The radical shift to digital mechanisms for almost every process is evident across all industries and verticals. This is often especially true in financial services, where the legacy environment is many times unable to keep up with the rapidly shifting demands of the consumer. The constant pressure to provide complete, omnichannel delivery of customer-facing solutions to meet both regulatory and customer demands is putting enormous pressure on...
The best way to leverage your CloudEXPO | DXWorldEXPO presence as a sponsor and exhibitor is to plan your news announcements around our events. The press covering CloudEXPO | DXWorldEXPO will have access to these releases and will amplify your news announcements. More than two dozen Cloud companies either set deals at our shows or have announced their mergers and acquisitions at CloudEXPO. Product announcements during our show provide your company with the most reach through our targeted audienc...
JETRO showcased Japan Digital Transformation Pavilion at SYS-CON's 21st International Cloud Expo® at the Santa Clara Convention Center in Santa Clara, CA. The Japan External Trade Organization (JETRO) is a non-profit organization that provides business support services to companies expanding to Japan. With the support of JETRO's dedicated staff, clients can incorporate their business; receive visa, immigration, and HR support; find dedicated office space; identify local government subsidies; get...
DXWorldEXPO LLC announced today that All in Mobile, a mobile app development company from Poland, will exhibit at the 22nd International CloudEXPO | DXWorldEXPO. All In Mobile is a mobile app development company from Poland. Since 2014, they maintain passion for developing mobile applications for enterprises and startups worldwide.
@DevOpsSummit at Cloud Expo, taking place November 12-13 in New York City, NY, is co-located with 22nd international CloudEXPO | first international DXWorldEXPO and will feature technical sessions from a rock star conference faculty and the leading industry players in the world.
"Akvelon is a software development company and we also provide consultancy services to folks who are looking to scale or accelerate their engineering roadmaps," explained Jeremiah Mothersell, Marketing Manager at Akvelon, in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
As data explodes in quantity, importance and from new sources, the need for managing and protecting data residing across physical, virtual, and cloud environments grow with it. Managing data includes protecting it, indexing and classifying it for true, long-term management, compliance and E-Discovery. Commvault can ensure this with a single pane of glass solution – whether in a private cloud, a Service Provider delivered public cloud or a hybrid cloud environment – across the heterogeneous enter...
DXWorldEXPO LLC announced today that ICC-USA, a computer systems integrator and server manufacturing company focused on developing products and product appliances, will exhibit at the 22nd International CloudEXPO | DXWorldEXPO. DXWordEXPO New York 2018, colocated with CloudEXPO New York 2018 will be held November 11-13, 2018, in New York City. ICC is a computer systems integrator and server manufacturing company focused on developing products and product appliances to meet a wide range of ...
More and more brands have jumped on the IoT bandwagon. We have an excess of wearables – activity trackers, smartwatches, smart glasses and sneakers, and more that track seemingly endless datapoints. However, most consumers have no idea what “IoT” means. Creating more wearables that track data shouldn't be the aim of brands; delivering meaningful, tangible relevance to their users should be. We're in a period in which the IoT pendulum is still swinging. Initially, it swung toward "smart for smart...
Headquartered in Plainsboro, NJ, Synametrics Technologies has provided IT professionals and computer systems developers since 1997. Based on the success of their initial product offerings (WinSQL and DeltaCopy), the company continues to create and hone innovative products that help its customers get more from their computer applications, databases and infrastructure. To date, over one million users around the world have chosen Synametrics solutions to help power their accelerated business or per...
Dion Hinchcliffe is an internationally recognized digital expert, bestselling book author, frequent keynote speaker, analyst, futurist, and transformation expert based in Washington, DC. He is currently Chief Strategy Officer at the industry-leading digital strategy and online community solutions firm, 7Summits.