Crunchbase Data Mashed Into Microsoft Pivot

About two weeks ago I had the good fortune to spend some time at an offsite where I met Gary Flake.  I remember reading the Wired Magazine cover piece on Gary a few years back, but didn’t have any idea who he was when I was introduced to him at the offsite.  As one of Microsoft’s Technical Fellows, he’s basically one of the 20 or so smartest engineers in the company.  Spending time with a guy like that is a treat, and this guy thinks about stuff that gets me excited.  Data and systems.

It’s a good thing Gary is so good at his job, because when he gave me the initial pitch for Pivot I thought it sounded about as interesting as a new sorting algorithm [NOTE: the downloads are restricted to token holders, so if you are interested in getting Pivot, hit me up on Twitter and I will get you one].  It wasn’t a great pitch.  Only after I saw the software in action, and lifted my jaw off the floor, did I run back over to Gary and offer to rewrite his 25-word pitch.  My motives were not altogether altruistic.  I wanted access to the software, but more importantly I wanted access to the tools to create my own data sets.

The unofficial, not-blessed-by-Microsoft way I would describe Pivot is: a client application for exploring user-created data sets along multiple criteria in a rich, visual way.  In short, it’s Pivot Tables + Crack + WPF.  The demo datasets that Gary was showing were interesting, but nothing about the data was actionable.  It was informational, but not insight generating.  My brain jumped to dumping CRM data into Pivot…or a bug database…or a customer evidence set.  Things that were actionable, traditionally hard to search, and would benefit from a visual metaphor.  Then, like a ton of bricks, it hit me.  What about Crunchbase?

Spend a few minutes wandering around Crunchbase and you realize what an incredibly rich dataset they have assembled, and yet the search and browse interface could be better.  It’s rather simplistic, and it’s not possible to dive deeper into a search to refine it.  So that was my project: use the Crunchbase API to generate a dataset for Pivot.  It sounded simple enough.  Here’s how I did it, and the result.  (Here’s a link to the CXML for those of you with the Pivot browser who want to see mine in action – WARNING: it takes about 20 seconds to load.)

The Code

I have created a CodePlex project for the CrunchBase Grabber, and welcome any additions to the project.

The first problem I had to solve was how to take the JSON objects down and use them in C#.  I normally would have done something like this in Python and used the BeautifulSoup library, but I really wanted to do a soup-to-nuts C# project and walk a mile in my customers’ shoes.  It turns out that we have a pretty good object for doing just this.  In the System.Web.Script.Serialization namespace (for which you have to add a reference to the System.Web.Extensions assembly) there is a nice JavaScriptSerializer object.  This was nice to use, but the web information was a bit confusing.  It appears that this was marked obsolete in .NET 3.5 and then brought back in 3.5 SP1.  It’s back and it works.

What I liked about the JavaScriptSerializer was that it could take an arbitrary JSON object in as a stream and then deserialize it into an object of my creation.  I only needed to include the fields that I wanted from the object, so long as the names mapped to the items in the JSON object.  That made creating a custom class for just the data I wanted much easier than enumerating all of the possible data types.  Here are the fields of my CrunchBase class:

    public class CrunchBase
    {
        public string name;
        public string permaLink;
        public string homepage_url;
        public string crunchbase_url;
        public string category_code;
        public string description; // = "";
        public int? number_of_employees; // = 0;
        public string overview;
        public bool deadpool;
        public int? deadpool_year; //= "";
        public imgStructure image;
        public List<locStructure> offices;
        public string tag_list;
        public int? founded_year;
        public List<fndStructure> funding_rounds;
        public Dictionary<string, fndStructure> aggFundStructure = new Dictionary<string,fndStructure>();
        public List<string> keyword_tags;
    }

There are a couple of things I want to share which will make life a lot easier for you if you plan on using this JavaScriptSerializer.  First, know how to make a type nullable.  If you don’t know what that means, here’s the short version: for any value type (anything other than a string or other reference type), add that “?” after the type, and that will allow you to assign null to it.  Why is this important?  During the deserialization process, you are bound to hit null values in the stream.  This is especially true if you aren’t in control of the stream, as I wasn’t with Crunchbase.  That’s 4 hours of frustration from my life I just saved you.  I left my comments in the code above to show you I tried all kinds of things to solve this “assignment of null” exception, and none of them worked.  Just use the “?”.
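To make that concrete, here’s a minimal sketch of the failure mode (the FoundingInfo class and the inline JSON are hypothetical, but the behavior is exactly what the Crunchbase stream will trigger):

    using System;
    using System.Web.Script.Serialization; //requires a reference to System.Web.Extensions

    public class FoundingInfo
    {
        public string name;       //strings are reference types, so null is fine
        public int? founded_year; //value type: without the "?", a null in the JSON throws
    }

    public class NullableDemo
    {
        public static void Main()
        {
            JavaScriptSerializer ser = new JavaScriptSerializer();

            //Crunchbase regularly returns null for fields like founded_year
            FoundingInfo info = ser.Deserialize<FoundingInfo>(
                "{\"name\":\"example\",\"founded_year\":null}");

            Console.WriteLine(info.founded_year.HasValue
                ? info.founded_year.Value.ToString()
                : "unknown");
        }
    }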

Second is understanding the custom data types I had to create.  Most JSON objects will have nested data structures.  When that happens, you will need to create a new data type with the same name as the field coming back in the JSON.  In this example, let’s look at the image data:

    public class imgStructure
    {
        public List<List<object>> available_sizes;
        public string attribution;
    }

The available_sizes field actually comes back with a set of sizes and a relative file location for each size.  Because each entry mixes numbers and text, a List of type object had to be used.  That’s another 3 hours I just saved you.  Here’s the JSON that came back so you can see what I mean:

 "image":
  {"available_sizes":
    [[[150,
       41],
      "assets/images/resized/0000/2755/2755v28-max-150x150.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-250x250.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-450x450.png"]],
   "attribution": null},

Getting at that data would prove difficult:

    return baseURL + this.image.available_sizes[1][1].ToString();

Because I wanted the middle-sized logo and its location, I had to use [1][1] to get the string.  Had I wanted the sizes, I would have needed [1][0][0] or [1][0][1], because [1][0] returns an object which is itself an array.  Yes, it’s confusing and annoying, but if you know what you want, navigating the complex nested data type can be done.
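One wrinkle worth calling out: because the inner elements are typed as object, you can’t chain a third indexer directly in C# without a cast.  Here’s a minimal sketch of pulling the sizes out, assuming the deserialized inner array comes back as something implementing IList (the company variable stands in for a deserialized CrunchBase instance):

    //the size pair is typed as object, so cast before indexing into it
    System.Collections.IList sizePair =
        (System.Collections.IList)company.image.available_sizes[1][0];
    int width = Convert.ToInt32(sizePair[0]);  //e.g. 220
    int height = Convert.ToInt32(sizePair[1]); //e.g. 61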

There were actually two JSON streams I needed to parse.  The first was the company list, which I retrieved by creating a CompanyGenerator class; it creates the WebRequest to the API, gets the company list JSON, and then parses it into a list of company objects.

    public class CompanyGenerator
    {
        //this is how we call out to crunchbase to get their full list of companies
        public List<cbCompanyObject> GetCompanyNames()
        {
            string jsonStream;
            JavaScriptSerializer ser = new JavaScriptSerializer();

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create("http://api.crunchbase.com/v/1/companies.js");

            jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();

            //as opposed to the single company calls, this returns a list of companies, so we have to
            //stick it into a list
            List<cbCompanyObject> jsonCompanies = ser.Deserialize<List<cbCompanyObject>>(jsonStream);

            return jsonCompanies;
        }

    }

Once I had that list, it was a simple matter of iterating over it and fetching the individual JSON object for each company.

            foreach (cbCompanyObject company in companyNames)
            {
                //with a company name parsed from JSON, create the stream of the company specific JSON
                string jsonStream = cjStream.GetJsonStream(company.name);

                if (jsonStream != null)
                {
                    try
                    {
                        //with the stream, now deserialize into the CrunchBase object
                        CrunchBase jsonCrunchBase = ser.Deserialize<CrunchBase>(jsonStream);

                        //assuming that worked, we need to clean up and create some additional meta data
                        jsonCrunchBase.FixCrunchBaseURL();
                        jsonCrunchBase.AggregateFunding();
                        jsonCrunchBase.SplitTagString();

                        //...the rest of the loop (storing the object for output) is in the CodePlex project
                    }
                    catch (InvalidOperationException)
                    {
                        //malformed JSON for this company - skip it and move on
                    }
                }
            }
Those functions FixCrunchBaseURL(), AggregateFunding() and SplitTagString() were post-processing functions meant to shape the data for my needs.  The AggregateFunding() function was a really good time, and I’ll leave it as an exercise for the reader, should you want to enjoy the fun of parsing an arbitrary number of nested funding objects, assigning each funding event to the right round type, and summing the total funding per round.
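If you want a head start on that exercise, here is a minimal sketch of the shape such an aggregation could take.  Note that the round_code and raised_amount fields on fndStructure are my assumptions about the funding JSON, not something shown above, and the null checks matter for the usual user-generated-data reasons:

    //hypothetical sketch - assumes fndStructure declares:
    //  public string round_code;     (e.g. "seed", "a", "b")
    //  public double? raised_amount;
    public void AggregateFunding()
    {
        if (funding_rounds == null)
            return;

        foreach (fndStructure round in funding_rounds)
        {
            if (round == null || round.round_code == null)
                continue;

            //one bucket per round type, summing the amounts within each round
            if (!aggFundStructure.ContainsKey(round.round_code))
                aggFundStructure[round.round_code] =
                    new fndStructure { round_code = round.round_code, raised_amount = 0 };

            aggFundStructure[round.round_code].raised_amount += round.raised_amount ?? 0;
        }
    }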

Since the data is all user generated, and there’s no guarantee it’s reliable, I had to trap the exception for a company URL that simply doesn’t exist:

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create(apiUrlBase + companyName + urlEnd);

            try
            {
                jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();
                return jsonStream;
            }
            catch (System.Net.WebException)
            {
                //a 404 from the API means the permalink is stale - report it and return null
                Console.WriteLine("Company: {0} - URL bad", companyName);
                return null;
            }

I thought it strange that the company list would return permalinks to companies that are no longer listed in Crunchbase or no longer have a JSON dataset, but as long as you trap the exception, things are fine.  Once the data came back and I put it into the object, I could selectively dump data to a text file.
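That dump looked roughly like the following sketch.  The goodCompanies list, the baseURL variable, and the specific columns are stand-ins for illustration; what matters is that the headings line up with the columns the Pivot Excel add-in expects (more on that below):

    //hypothetical sketch of the selective dump - one tab-delimited line per company
    //goodCompanies is a List<CrunchBase> accumulated in the fetch loop above
    using (StreamWriter writer = new StreamWriter("crunchbase.txt"))
    {
        writer.WriteLine("Name\tCategory\tEmployees\tFounded\tImage");

        foreach (CrunchBase c in goodCompanies)
        {
            writer.WriteLine("{0}\t{1}\t{2}\t{3}\t{4}",
                c.name, c.category_code, c.number_of_employees,
                c.founded_year, baseURL + c.image.available_sizes[1][1]);
        }
    }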

So that’s a simple walk-through of how my code accessed the CrunchBase API in preparation for creating my Pivot data set.  Again, I have created a CodePlex project for the CrunchBase Grabber and welcome additions.

Data Set Creation

Knowing what I knew about how the Excel add-in worked, I created my text file with well-defined delimiters and column headings.  I couldn’t sort out how to import the HTML that was returned in the JSON for the company overview without having Excel puke on the import.  That’s a nice-to-have that I will get to at a later time.

It turns out that using the tool to create the columns is less error-prone than simply trying to insert them yourself.

By creating the columns ahead of time, I could simply copy and paste from my imported tab-delimited file into my Pivot collection.  Here’s another tip – if you have a lot of image locations that are sitting on a server offsite (as in, on the Crunchbase servers), save that copy and paste for last.  As soon as you insert the URLs into the Pivot data set spreadsheet, the Pivot add-in will try to go fetch all of the images, which can take some time.

I processed my text file from about 15K good entries down to about 4K.  The first criterion was that the company had to have a logo.  Second, it had to have funding, and it had to have a country, a founding year, and a category listed.  I had been given a heads up that anything more than about 5K objects in a single CXML file would be problematic.
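As a sketch, that filter boils down to a predicate like this (the field names come from the CrunchBase class above; I’m assuming the country is carried on the office list):

    //hypothetical sketch of the filter that took ~15K entries down to ~4K
    public bool KeepCompany(CrunchBase c)
    {
        return c.image != null                                        //must have a logo
            && c.funding_rounds != null && c.funding_rounds.Count > 0 //must have funding
            && c.offices != null && c.offices.Count > 0               //country lives in the office list
            && c.founded_year.HasValue                                //must have a founding year
            && !string.IsNullOrEmpty(c.category_code);                //must have a category
    }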

I also wanted to ensure that some of the fields were not used for filtering but did show up in the information panel.  Luckily the tool made this pretty simple.  By simply moving the cursor to the desired column, you can tick the check boxes to change where data will appear and how it can be used.  This is a nice touch of the Excel add-in tool.

Once the data was all in, I clicked the “Publish Collection” button and wandered off for an age or two.  It took, erm, a little bit of time, even on my jacked-up laptop, to process the collection and create the CXML file.  If you have access to the Pivot app, you can point your browser at this URL to see the final result.  For those of you who don’t have access to the Pivot browser, I have included a few screen caps to show what the resulting dataset looked like.

[Screenshot: the full 4,000-company data set rendered in Pivot, with the filtering criteria on the left]

The first shot shows what the full data set renders to in the window.  That’s all 4,000 companies, with the Pivot criteria on the left.  The really cool thing about Pivot is the way you can explore a data set.  Start with the full set of companies, and pivot on the web companies.  Refine that to only companies in CA and WA.  Decide that you want companies funded between 2004 and 2006, and only those that raised between $2 million and $5 million.  You can do that, in real time, and all the data reorganizes itself.  Then you can click on a company logo and get additional information, as in this example screen cap:

[Screenshot: a filtered view of the collection, with a company logo selected and its details shown in the information panel]

All of the filtering happens in real time and uses the Deep Zoom technology.  When you change your query criteria, any additional data is fetched via simple HTTP requests, and it’s all quite fast.  For those of you with the Pivot app, you can see how quickly this exploration renders once you have loaded the CXML.

For my Pivot data set, the criteria I opted to let the search pivot on were: company category, number of employees, city, state, country, year funded, total funding, and keyword tags.  It makes for some good data deep dives.  I want my next iteration to have funding companies as a pivot point as well.  It would be nice to see which investors are in bed together the most.

Put simply, I am stunned by this technology.  I have barely scratched the surface of what is possible with building data sets for Pivot.  I plan to spend quite a bit of my free time in the next few weeks playing with it and thinking about additional data sources to plug in.  I love that we are building such cool stuff at our company, and I love how accessible it was to an inquisitive mind.  I cannot wait to see what other data sets get created.


More Stories By Brandon Watson

Brandon Watson is Director for Windows Phone 7. He specifically focuses on developers and the developer platform. He rejoined Microsoft in 2008 after nearly a decade on Wall Street and running successful start-ups. He has both an engineering degree and an economics degree from the University of Pennsylvania, as well as an MBA from The Wharton School of Business, and blogs at www.manyniches.com.
