
Crunchbase Data Mashed Into Microsoft Pivot

About two weeks ago I had the good fortune to spend some time at an offsite where I met Gary Flake.  I remember reading the Wired Magazine cover piece on Gary a few years back, but had no idea who he was when I was introduced to him at the offsite.  As one of Microsoft’s Technical Fellows, he’s basically one of the 20 or so smartest engineers in the company.  Spending time with a guy like that is a treat, and he thinks about the stuff that gets me excited: data and systems.

It’s a good thing Gary is so good at his job, because when he gave me the initial pitch for Pivot I thought it sounded about as interesting as a new sorting algorithm [NOTE: the downloads are restricted to token holders, so if you are interested in getting Pivot, hit me up on Twitter and I will get you one].  It wasn’t a great pitch.  Only after I saw the software in action, and lifted my jaw off the floor, did I run back over to Gary and offer to rewrite his 25-word pitch.  My motives were not altogether altruistic.  I wanted access to the software, but more importantly I wanted access to the tools to create my own data sets.

The unofficial, not-blessed-by-Microsoft way I would describe Pivot is: a client application for exploring user-created data sets along multiple criteria in a rich, visual way.  In short, it’s Pivot Tables + Crack + WPF.  The demo datasets Gary was showing were interesting, but nothing about the data was actionable.  It was informational, but not insight-generating.  My brain jumped to dumping CRM data into Pivot…or a bug database…or a customer evidence set.  Things that were actionable, traditionally hard to search, and would benefit from a visual metaphor.  Then, like a ton of bricks, it hit me.  What about Crunchbase?

Spend a few minutes wandering around Crunchbase and you realize what an incredibly rich dataset they have assembled, and yet the search-and-browse interface could be better: it’s rather simplistic, and it’s not possible to dive deeper into a search to refine it.  So that was my project.  I was going to use the Crunchbase API to generate a dataset for Pivot.  Sounded simple enough.  Here’s how I did it, and the result.  (Here’s a link to the CXML for those of you with the Pivot browser who want to see mine in action – WARNING: it takes about 20 seconds to load.)

The Code

I have created a CodePlex project for the CrunchBase Grabber, and welcome any additions to the project.

The first problem I had to solve was how to pull the JSON objects down and use them in C#.  I normally would have done something like this in Python with the BeautifulSoup library, but I really wanted to do a soup-to-nuts C# project, and walk a mile in my customers’ shoes.  It turns out that we have a pretty good object for doing just this.  In the System.Web.Script.Serialization namespace (for which you have to add a reference to the System.Web.Extensions assembly) there is a nice JavaScriptSerializer object.  This was nice to use, but the information on the web was a bit confusing.  It appears the class was marked deprecated at one point and then brought back in .NET 3.5 SP1.  It’s back and it works.
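If you haven’t used it, here’s a minimal sketch of the round trip (the tiny Person class is purely for illustration, not part of my project):

    using System.Web.Script.Serialization;  // lives in the System.Web.Extensions assembly

    public class Person
    {
        public string name;
        public int? age;
    }

    JavaScriptSerializer ser = new JavaScriptSerializer();
    Person p = ser.Deserialize<Person>("{\"name\":\"Gary\",\"age\":null}");
    string json = ser.Serialize(p);  // and back out to JSON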

What I liked about the JavaScriptSerializer was that it could take an arbitrary JSON object in as a stream, and then deserialize to an object of my creation.  I only needed to include the fields that I wanted from the object, so long as the names mapped to the items in the JSON object.  That made creating a custom class for just the data I wanted much easier than enumerating all of the possible data types.

    public class CrunchBase
    {
        public string name;
        public string permaLink;
        public string homepage_url;
        public string crunchbase_url;
        public string category_code;
        public string description; // = "";
        public int? number_of_employees; // = 0;
        public string overview;
        public bool deadpool;
        public int? deadpool_year; //= "";
        public imgStructure image;
        public List<locStructure> offices;
        public string tag_list;
        public int? founded_year;
        public List<fndStructure> funding_rounds;
        public Dictionary<string, fndStructure> aggFundStructure = new Dictionary<string,fndStructure>();
        public List<string> keyword_tags;
    }

There are a couple of things I want to share that will make life a lot easier for you if you plan on using this JavaScriptSerializer.  First, know how to make a type nullable.  If you don’t know what that means, here’s the short version: for any type other than a string, add that “?” after the type, and that will allow a null value to be assigned to it.  Why is this important?  During the deserialization process, you are bound to hit null values in the stream.  This is especially true if you aren’t in control of the stream, as I wasn’t with Crunchbase.  That’s 4 hours of frustration from my life I just saved you.  I left my comments in the code above to show you I tried all kinds of things to solve this “assignment of null” exception, and none of them worked.  Just use the “?”.
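Here’s a tiny before-and-after to make it concrete (the class and JSON are made up for the example):

    public class FoundedInfo
    {
        //public int founded_year;   // throws during deserialization when the JSON value is null
        public int? founded_year;    // happily accepts the null
    }

    JavaScriptSerializer ser = new JavaScriptSerializer();
    FoundedInfo fi = ser.Deserialize<FoundedInfo>("{\"founded_year\": null}");
    // fi.founded_year is now null, and no exception is thrown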

Second is understanding the custom data types that were used.  Most JSON objects will have nested data structures.  When that happens, you will need to create a new data type with the same name as the data coming back from the object.  In this example, let’s look at the image data:

    public class imgStructure
    {
        public List<List<object>> available_sizes;
        public string attribution;
    }

The available_sizes field actually comes back with a set of sizes and a relative file location.  Because there are both numbers and text, a List of type object had to be used.  That’s another 3 hours I just saved you.  Here’s the JSON that came back so you can see what I mean:

 "image":
  {"available_sizes":
    [[[150,
       41],
      "assets/images/resized/0000/2755/2755v28-max-150x150.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-250x250.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-450x450.png"]],
   "attribution": null},

Getting at that data would prove difficult:

    return baseURL + this.image.available_sizes[1][1].ToString();

Because I wanted the middle-sized logo and its location, I had to use [1][1] to get the string.  Had I wanted the sizes, I would have needed [1][0][0] or [1][0][1], because [1][0] returns an object which is itself an array.  Yes, it’s confusing and annoying, but if you know what you want, navigating the complex nested data type can be done.
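If you do want the sizes, here’s a sketch of the extra cast required; I’m assuming the serializer hands the inner array back as a non-generic ArrayList, so check the runtime type in the debugger if this cast comes back null:

    // [1] = the middle-sized entry, [0] = the size pair, which is itself an array
    var sizePair = this.image.available_sizes[1][0] as System.Collections.ArrayList;
    if (sizePair != null)
    {
        int width = Convert.ToInt32(sizePair[0]);
        int height = Convert.ToInt32(sizePair[1]);
    }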

There were actually two JSON streams I needed to parse.  The first was the company list, which I retrieved by creating a CompanyGenerator class that issues the WebRequest to the API, reads back the company list JSON, and parses it into a list of company objects.

    public class CompanyGenerator
    {
        //this is how we call out to crunchbase to get their full list of companies
        public List<cbCompanyObject> GetCompanyNames()
        {
            string jsonStream;
            JavaScriptSerializer ser = new JavaScriptSerializer();

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create("http://api.crunchbase.com/v/1/companies.js");

            jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();

            //as opposed to the single company calls, this returns a list of companies, so we have to
            //stick it into a list
            List<cbCompanyObject> jsonCompanies = ser.Deserialize<List<cbCompanyObject>>(jsonStream);

            return jsonCompanies;
        }

    }

Once I had that list, it was a simple matter of iterating over the list and fetching the individual JSON objects per company.

            foreach (cbCompanyObject company in companyNames)
            {
                string jsonStream;

                //with a company name parsed from JSON, create the stream of the company specific JSON
                jsonStream = cjStream.GetJsonStream(company.name);

                if (jsonStream != null)
                {
                    try
                    {
                        //with the stream, now deserialize into the Crunchbase object
                        CrunchBase jsonCrunchBase = ser.Deserialize<CrunchBase>(jsonStream);

                        //assuming that worked, we need to clean up and create some additional meta data
                        jsonCrunchBase.FixCrunchBaseURL();
                        jsonCrunchBase.AggregateFunding();
                        jsonCrunchBase.SplitTagString();
                    }
                    catch (Exception ex)
                    {
                        //bad JSON for one company shouldn't kill the whole run
                        Console.WriteLine("Company: {0} - {1}", company.name, ex.Message);
                    }
                }
            }
Those functions FixCrunchBaseURL(), AggregateFunding() and SplitTagString() are post-processing functions meant to extract more specific data for my needs.  The AggregateFunding() function was a really good time, and is left as an exercise for the reader, should you want to enjoy the fun of parsing an arbitrary number of nested funding objects, assigning each funding event to the right round type, and summing the total funding per round.
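To give you a feel for its shape without spoiling the fun, here’s a simplified sketch (not the actual code from the project; it assumes fndStructure carries a string round_code and a nullable raised_amount, per the Crunchbase v1 JSON):

    public void AggregateFunding()
    {
        if (funding_rounds == null) return;

        foreach (fndStructure round in funding_rounds)
        {
            if (round == null || round.round_code == null) continue;

            fndStructure agg;
            if (aggFundStructure.TryGetValue(round.round_code, out agg))
            {
                //seen this round type before, so sum the totals
                agg.raised_amount = (agg.raised_amount ?? 0) + (round.raised_amount ?? 0);
            }
            else
            {
                aggFundStructure[round.round_code] = round;
            }
        }
    }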

Since the data is all user-generated and there’s no guarantee it’s reliable, I had to trap the exception thrown when a company URL simply doesn’t exist:

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create(apiUrlBase + companyName + urlEnd);

            try
            {
                jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();
                return jsonStream;
            }
            catch (System.Net.WebException)
            {
                //a permalink with no JSON behind it throws a WebException
                Console.WriteLine("Company: {0} - URL bad", companyName);
                return null;
            }

I thought it strange that the company list would return permalinks to companies that are no longer listed in Crunchbase or have no JSON dataset, but as long as you trap the exception, things are fine.  Once the data came back and I put it into the object, I could selectively dump data to a text file.
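The dump loop itself was nothing fancy.  Here’s a minimal sketch, assuming the deserialized companies were collected into a list called companies (the column set is illustrative, not my exact file):

    using (StreamWriter sw = new StreamWriter("companies.txt"))
    {
        //tab-delimited, with column headings on the first line
        sw.WriteLine("Name\tCategory\tFounded\tEmployees");
        foreach (CrunchBase cb in companies)
        {
            sw.WriteLine("{0}\t{1}\t{2}\t{3}",
                cb.name, cb.category_code, cb.founded_year, cb.number_of_employees);
        }
    }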

So that’s a simple walk-through of how my code accessed the CrunchBase API in preparation for creating my Pivot data set.  Again, I have created a CodePlex project for the CrunchBase Grabber and welcome additions.

Data Set Creation

Knowing what I knew about how the Excel add-in worked, I created my text file to have well-defined delimiters and column headings.  I couldn’t sort out how to import the HTML that came back in the JSON for the Company Overview without Excel puking on the import.  That’s a nice-to-have that I will get to at a later time.

It turns out that using the tool to create the columns is less error-prone than simply trying to insert them yourself.

By creating the columns ahead of time, I could simply copy and paste from my imported tab-delimited file into my Pivot collection.  Here’s another tip: if you have a lot of image locations that are sitting on a server offsite (as in, on the Crunchbase servers), save that copy-and-paste for last.  As soon as you insert the URLs into the Pivot data set XLS, the Pivot add-in will try to go fetch all of the images, which can take some time.

I processed my text file down from about 15K good entries to about 4K.  The first criterion was that the company had to have a logo.  Second, it had to have funding, a country, a founding year, and a category listed.  I had been given the heads-up that anything more than about 5K objects in a single CXML file would be problematic.
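In code, that filter boiled down to a predicate along these lines (a sketch; the country_code field on the office record is an assumption, since I didn’t show the locStructure class above):

    //keep only companies with a logo, funding, a country, a founded year, and a category
    bool keep =
        cb.image != null
        && cb.funding_rounds != null && cb.funding_rounds.Count > 0
        && cb.offices != null && cb.offices.Count > 0
        && !string.IsNullOrEmpty(cb.offices[0].country_code)
        && cb.founded_year != null
        && !string.IsNullOrEmpty(cb.category_code);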

I also wanted to ensure that some of the fields were not used for filtering but did show up in the information panel.  Luckily the tool made this pretty simple.  By moving the cursor to the desired column, you can tick the check boxes to change where the data will appear and how the user can work with it.  This is a nice touch of the Excel add-in tool.

Once the data was all in, I clicked the “Publish Collection” button and wandered off for an age or two.  It took, erm, a little bit of time, even on my jacked-up laptop, to process the collection and create the CXML file.  If you have access to the Pivot app, you can point your browser at this URL to see the final result.  For those of you who don’t have access to the Pivot Browser, I have included a few screen caps to show what the resulting dataset looks like.

[Screenshot: the full data set rendered in the Pivot window]

The first shot is what the full data set renders to in the window.  That’s all 4,000 companies, with the Pivot criteria on the left.  The really cool thing about Pivot is the way you can explore a data set.  Start with the full set of companies and pivot on the web companies.  Refine that to only companies in CA and WA.  Decide that you want companies funded between 2004 and 2006, and only those that raised between $2 million and $5 million.  You can do that, in real time, and all the data reorganizes itself.  Then you can click on a company logo and get additional information.  Here’s another example screen cap.

[Screenshot: a refined view of the collection after filtering]

All of the filtering happens in real time and uses the Deep Zoom technology.  When you change your query criteria, any additional data is fetched via simple HTTP requests, and it’s all quite fast.  For those of you with the Pivot app, you can see how quickly this exploration renders once you have loaded the CXML.

The fields I opted to allow the search to pivot on were: company category, number of employees, city, state, country, year funded, total funding, and keyword tags.  It makes for some good data deep dives.  I want my next iteration to have funding companies as a pivot point as well.  It would be nice to see which investors are in bed together the most.

Put simply, I am stunned by this technology.  I have barely scratched the surface of what is possible with building data sets for Pivot.  I plan to spend quite a bit of my free time in the next few weeks playing with this and thinking about additional data sources to plug into this.  I love that we are building such cool stuff at our company, and I love how accessible it was to an inquisitive mind.  I cannot wait to see what other data sets get created.


More Stories By Brandon Watson

Brandon Watson is Director for Windows Phone 7. He specifically focuses on developers and the developer platform. He rejoined Microsoft in 2008 after nearly a decade on Wall Street and running successful start-ups. He has both an engineering degree and an economics degree from the University of Pennsylvania, as well as an MBA from The Wharton School of Business, and blogs at www.manyniches.com.
