Crunchbase Data Mashed Into Microsoft Pivot

About two weeks ago I had the good fortune to spend some time at an offsite where I met Gary Flake.  I remember reading the Wired Magazine cover piece on Gary a few years back, but didn’t have any idea who he was when I was introduced to him at the offsite.  As one of Microsoft’s Technical Fellows, he’s basically one of the 20 or so smartest engineers in the company.  Spending time with a guy like that is a treat, and this guy thinks about stuff that gets me excited: data and systems.

It’s a good thing Gary is so good at his job, because when he gave me the initial pitch for Pivot I thought it sounded about as interesting as a new sorting algorithm [NOTE: the downloads are restricted to token holders, so if you are interested in getting Pivot, hit me up on Twitter and I will get you one].  It wasn’t a great pitch.  Only after I saw the software in action, and lifted my jaw off the floor, did I run back over to Gary and offer to rewrite his 25-word pitch.  My motives were not altogether altruistic.  I wanted access to the software, but more importantly I wanted access to the tools to create my own data sets.

The unofficial, not-blessed-by-Microsoft way I would describe Pivot is: a client application for exploring user-created data sets along multiple criteria in a rich, visual way.  In short, it’s Pivot Tables + Crack + WPF.  The demo data sets that Gary was showing were interesting, but nothing about the data was actionable.  It was informational, but not insight generating.  My brain jumped to dumping CRM data into Pivot…or a bug database…or a customer evidence set.  Things that were actionable, traditionally hard to search, and would benefit from a visual metaphor.  Then, like a ton of bricks, it hit me.  What about Crunchbase?

Spend a few minutes wandering around Crunchbase and you realize what an incredibly rich data set they have assembled, and yet the search and browse interface could be better.  It’s rather simplistic, and there’s no way to dive deeper into a search to refine it.  So that was my project: use the Crunchbase API to generate a data set for Pivot.  Sounded simple enough.  Here’s how I did it, and the result.  (Here’s a link to the CXML for those of you with the Pivot browser who want to see mine in action – WARNING: it takes about 20 seconds to load.)

The Code

I have created a CodePlex project for the CrunchBase Grabber, and welcome any additions to the project.

The first problem I had to solve was how to pull the JSON objects down and use them in C#.  I normally would have done something like this in Python and used the BeautifulSoup library, but I really wanted to do a soup-to-nuts C# project and walk a mile in my customers’ shoes.  It turns out that we have a pretty good object for doing just this.  In the System.Web.Script.Serialization namespace (for which you have to add a reference to the System.Web.Extensions assembly) there is a nice JavaScriptSerializer object.  This was nice to use, but the information on the web was a bit confusing.  It appears that it was marked obsolete in .NET 3.5, and then brought back in 3.5 SP1.  It’s back and it works.

What I liked about the JavaScriptSerializer was that it could take an arbitrary JSON object in as a stream, and then deserialize to an object of my creation.  I only needed to include the fields that I wanted from the object, so long as the names mapped to the items in the JSON object.  That made creating a custom class for just the data I wanted much easier than enumerating all of the possible data types.

    //the custom class holding just the Crunchbase fields I wanted
    public class CrunchBase
    {
        public string name;
        public string permaLink;
        public string homepage_url;
        public string crunchbase_url;
        public string category_code;
        public string description; // = "";
        public int? number_of_employees; // = 0;
        public string overview;
        public bool deadpool;
        public int? deadpool_year; //= "";
        public imgStructure image;
        public List<locStructure> offices;
        public string tag_list;
        public int? founded_year;
        public List<fndStructure> funding_rounds;
        public Dictionary<string, fndStructure> aggFundStructure = new Dictionary<string,fndStructure>();
        public List<string> keyword_tags;
    }

There are a couple of things I want to share which will make life a lot easier for you if you plan on using this JavaScriptSerializer.  First, know how to make a type nullable.  If you don’t know what that means, here’s the short version: for any type other than a string (or another reference type), add that “?” after the type, and that will allow you to assign null to it.  Why is this important?  During the deserialization process, you are bound to hit nulls in the stream.  This is especially true if you aren’t in control of the stream, as I wasn’t with Crunchbase.  That’s 4 hours of frustration from my life I just saved you.  I left my comments in there to show you I tried all kinds of things to solve this “assignment of null” exception, and none of them worked.  Just use the “?”.
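
To make that concrete, here’s a minimal repro you can compile on its own.  The class and field names here are illustrative, not from my project:

    //a minimal sketch of the null problem: with "public int founded_year;"
    //the Deserialize call blows up on the null in the JSON; declaring it
    //as "public int? founded_year;" makes it work
    using System;
    using System.Web.Script.Serialization;

    public class NullableDemo
    {
        public class Company
        {
            public string name;
            public int? founded_year;
        }

        public static void Main()
        {
            JavaScriptSerializer ser = new JavaScriptSerializer();
            Company c = ser.Deserialize<Company>("{\"name\":\"example\",\"founded_year\":null}");
            Console.WriteLine("{0} founded: {1}", c.name,
                c.founded_year.HasValue ? c.founded_year.Value.ToString() : "unknown");
        }
    }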

Second is understanding the data types that get created.  Most JSON objects will have nested data structures.  When that happens, you will need to create a new data type whose name matches the field coming back in the JSON.  In this example, let’s look at the image data:

    public class imgStructure
    {
        public List<List<object>> available_sizes;
        public string attribution;
    }

The available_sizes field actually comes back with a set of sizes and a relative file location.  Because there are numbers and text mixed together, a List of type object had to be used.  That’s another 3 hours I just saved you.  Here’s the JSON that came back so you can see what I mean:

 "image":
  {"available_sizes":
    [[[150,
       41],
      "assets/images/resized/0000/2755/2755v28-max-150x150.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-250x250.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-450x450.png"]],
   "attribution": null},

Getting at that data proved difficult:

    return baseURL + this.image.available_sizes[1][1].ToString();

Because I wanted the middle-sized logo and its location, I had to use the [1][1] to get the string.  Had I wanted the sizes, I would have needed a [1][0][0] or [1][0][1], because [1][0] returns the object holding the size array (which has to be cast before it can be indexed).  Yes, it’s confusing and annoying, but if you know what you want, navigating the complex nested data type can be done.
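
If you do want those size numbers, here’s a sketch of how I would get at them.  This is a guess at one workable approach, not code from the project; depending on how the serializer materializes the inner array it may be an object[] or an ArrayList, so casting to IList covers both:

    //pull the middle image's width and height out of the List<List<object>>;
    //available_sizes[1][0] is the [220, 61] pair, but it arrives typed as a
    //plain object, so cast it to IList before indexing
    System.Collections.IList sizePair =
        (System.Collections.IList)this.image.available_sizes[1][0];
    int width = Convert.ToInt32(sizePair[0]);   //e.g. 220
    int height = Convert.ToInt32(sizePair[1]);  //e.g. 61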

There were actually two JSON streams I needed to parse.  The first was the company list, which I retrieved with a CompanyGenerator class: it creates the WebRequest to the API, gets the company list JSON, and parses that list into a list of company objects.

    //needs: using System.Collections.Generic; using System.IO; using System.Net;
    //and using System.Web.Script.Serialization; (reference System.Web.Extensions)
    public class CompanyGenerator
    {
        //this is how we call out to crunchbase to get their full list of companies
        public List<cbCompanyObject> GetCompanyNames()
        {
            string jsonStream;
            JavaScriptSerializer ser = new JavaScriptSerializer();

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create("http://api.crunchbase.com/v/1/companies.js");

            jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();

            //as opposed to the single company calls, this returns a list of companies, so we have to
            //stick it into a list
            List<cbCompanyObject> jsonCompanies = ser.Deserialize<List<cbCompanyObject>>(jsonStream);

            return jsonCompanies;
        }
    }

Once I had that list, it was a simple matter of iterating over the list and fetching the individual JSON objects per company.

            foreach (cbCompanyObject company in companyNames)
            {
                string jsonStream;

                //with a company name parsed from JSON, create the stream of the company specific JSON
                jsonStream = cjStream.GetJsonStream(company.name);

                if (jsonStream != null)
                {
                    try
                    {
                        //with the stream, now deserialize into the Crunchbase object
                        CrunchBase jsonCrunchBase = ser.Deserialize<CrunchBase>(jsonStream);

                        //assuming that worked, we need to clean up and create some additional meta data
                        jsonCrunchBase.FixCrunchBaseURL();
                        jsonCrunchBase.AggregateFunding();
                        jsonCrunchBase.SplitTagString();

                        //... (rest of the loop body and exception handling elided)

Those functions, FixCrunchBaseURL(), AggregateFunding() and SplitTagString(), were post-processing functions meant to get more specific data for my needs.  The AggregateFunding() function was really good times, and I’ll leave it as an exercise for the reader should you want to enjoy the fun of parsing an arbitrary number of nested funding-round objects, assigning each funding event to the right round type, and summing the total funding per round.
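
For the curious, here’s roughly the shape that aggregation takes.  This is a sketch, not my actual function, and the round_code and raised_amount field names on fndStructure are assumptions about the Crunchbase JSON rather than anything shown above:

    //sketch of AggregateFunding(): bucket funding rounds by round type and
    //sum the amounts per bucket; assumes fndStructure carries round_code
    //(string) and raised_amount (double?)
    public void AggregateFunding()
    {
        if (funding_rounds == null)
            return;

        foreach (fndStructure round in funding_rounds)
        {
            if (round.round_code == null)
                continue;

            fndStructure existing;
            if (aggFundStructure.TryGetValue(round.round_code, out existing))
            {
                //seen this round type before: add this event to the running total
                existing.raised_amount = (existing.raised_amount ?? 0) + (round.raised_amount ?? 0);
            }
            else
            {
                //first event of this round type becomes the bucket
                aggFundStructure[round.round_code] = round;
            }
        }
    }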

Since the data is all user generated and there’s no guarantee that it’s reliable, I had to trap the exception thrown when a company URL simply doesn’t exist:

            string jsonStream;
            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create(apiUrlBase + companyName + urlEnd);

            try
            {
                jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();
                return jsonStream;
            }
            catch (System.Net.WebException)
            {
                Console.WriteLine("Company: {0} - URL bad", companyName);
                //hand back null so the caller knows to skip this company
                return null;
            }

I thought it strange that the company list would return permalinks to companies that are no longer listed in Crunchbase or that no longer have a JSON dataset, but as long as you trap the exception, things are fine.  Once the data came back and I put it into the object, I could selectively dump data to a text file.
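
The dump step itself was unremarkable; it looked something along these lines (the column choice, file name, and the goodCompanies list are illustrative, not my exact code):

    //illustrative sketch: write one tab-delimited line per company for the
    //Excel/Pivot import step
    using (StreamWriter writer = new StreamWriter("crunchbase_pivot.txt"))
    {
        foreach (CrunchBase cb in goodCompanies)
        {
            writer.WriteLine(string.Join("\t", new string[] {
                cb.name,
                cb.category_code,
                cb.founded_year.ToString(),   //empty string when null
                cb.crunchbase_url
            }));
        }
    }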

So that’s a simple walk-through of how my code accessed the CrunchBase API in preparation for creating my Pivot data set.  Again, I have created a CodePlex project for the CrunchBase Grabber and welcome additions.

Data Set Creation

Knowing what I knew about how the Excel add-in worked, I created my text file with well-defined delimiters and column headings.  I couldn’t sort out how to import the HTML returned in the JSON for the company overview without having Excel puke on the import.  That’s a nice-to-have that I will get to at a later time.
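
When I do get to it, the fix will probably look something like this sketch (one plausible approach, not anything in the project today): strip the tags and flatten the whitespace before the overview ever hits the text file.

    //sketch: sanitize the HTML overview so the Excel import doesn't choke;
    //strips tags, decodes entities, and flattens tabs/newlines (the file's
    //delimiters); needs using System.Text.RegularExpressions; and System.Web
    static string CleanOverview(string html)
    {
        if (html == null)
            return "";

        string text = Regex.Replace(html, "<[^>]+>", " ");        //drop the tags
        text = System.Web.HttpUtility.HtmlDecode(text);           //decode &amp; and friends
        return Regex.Replace(text, @"[\t\r\n ]+", " ").Trim();    //flatten whitespace
    }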

It turns out that using the tool to create the columns is less error prone than simply trying to insert them yourself.

By creating the columns ahead of time, I could simply copy and paste from my imported tab-delimited file into my Pivot collection.  Here’s another tip: if you have a lot of image locations that are sitting on a server offsite (as in, on the Crunchbase servers), save that copy and paste for last.  As soon as you insert the URLs into the Pivot data set spreadsheet, the Pivot add-in will try to go fetch all of the images, which can take some time.

I processed my text file down from about 15K good entries to about 4K.  The first criterion was that the company had to have a logo.  Second, it had to have funding, and it had to have a country, a founding year, and a category listed.  I had been given the heads up that anything more than about 5K objects in a single CXML file would be problematic.
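
In code terms, the filtering pass looked something like the following sketch.  The criteria are the ones above; how each maps onto a field in the CrunchBase object is my best guess (for example, I’m standing in for “has a country” with a non-empty offices list), and allCompanies is whatever list the deserialized objects landed in:

    //sketch of the filtering pass over the full company list; needs using System.Linq;
    List<CrunchBase> keepers = allCompanies.Where(c =>
        c.image != null &&                                          //must have a logo
        c.funding_rounds != null && c.funding_rounds.Count > 0 &&   //must have funding
        c.offices != null && c.offices.Count > 0 &&                 //stand-in for "has a country"
        c.founded_year != null &&                                   //founding year listed
        c.category_code != null                                     //category listed
    ).ToList();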

I also wanted to ensure that some of the fields were not used for filtering but did show up in the information panel.  Luckily the tool made this pretty simple.  By moving the cursor to the desired column, you can tick the check boxes to change where data will appear and how it can be used by the user.  This is a nice touch of the Excel add-in tool.

Once the data was all in, I clicked the “Publish Collection” button and wandered off for an age or two.  It took, erm, a little bit of time, even on my jacked up laptop, to process the collection and create the CXML file.  If you have access to the Pivot app, you can point your browser at this URL to see the final result.  For those of you who don’t have access to the Pivot Browser, I have included a few screen caps to show what the resulting dataset looked like.

[Screenshot: the full data set rendered in the Pivot window]

The first shot is what the full data set renders to in the window.  That’s all 4,000 companies, and the Pivot criteria are on the left.  The really cool thing about Pivot is the way you can explore a data set.  Start with the full set of companies, and pivot on the web companies.  Refine that to only companies in CA and WA.  Decide that you want companies funded between 2004 and 2006, and only those that raised between $2 million and $5 million.  You can do that, in real time, and all the data reorganizes itself.  Then you can click on a company logo and get additional information.  Here’s another example screen cap:

[Screenshot: a filtered view of the data set]

All of the filtering happens in real time and uses the Deep Zoom technology.  When you change your query criteria, any additional data is fetched via simple HTTP requests, and it’s all quite fast.  For those of you with the Pivot app, you can see how quickly this exploration renders once you have loaded the CXML.

The fields I opted to let the search pivot on were: company category, number of employees, city, state, country, year funded, total funding, and keyword tags.  It makes for some good data deep dives.  I want my next iteration to have funding companies as a pivot point as well; it would be nice to see which investors are in bed together the most.

Put simply, I am stunned by this technology.  I have barely scratched the surface of what is possible with building data sets for Pivot.  I plan to spend quite a bit of my free time in the next few weeks playing with this and thinking about additional data sources to plug into this.  I love that we are building such cool stuff at our company, and I love how accessible it was to an inquisitive mind.  I cannot wait to see what other data sets get created.


More Stories By Brandon Watson

Brandon Watson is Director for Windows Phone 7. He specifically focuses on developers and the developer platform. He rejoined Microsoft in 2008 after nearly a decade on Wall Street and running successful start-ups. He has both an engineering degree and an economics degree from the University of Pennsylvania, as well as an MBA from The Wharton School of Business, and blogs at www.manyniches.com.
