MySQL the .NET Way

The .NET Framework is an exciting and enabling technology, allowing developers using many different languages and platforms to share tools and components like never before. Open-source efforts to bring .NET to Linux and other platforms are starting to pay off, and I can't think of a better way to use an open-source version of .NET than to work with one of the best open-source databases, MySQL.

There are two major open-source .NET initiatives under way, Mono and Portable.NET. At the moment, the Portable.NET project does not have an ADO.NET stack available, so in this article I'll use Mono as my .NET vehicle of choice. There are several MySQL data providers available, including one built into the Mono package, but I'll focus on a provider from ByteFX, Inc. This provider is unique in that it is open source, written in 100% managed C# code, and Mono compatible.

The sample code included with this article has been tested using Mono for Windows and Linux. You should also note that any mention of the phrase .NET, unless distinctly specified, refers equally to Microsoft .NET and Mono.

Getting Started
One of the best ways to understand a new component is to build something with it. With that in mind, we're going to build a simple app using the ByteFX data provider and Mono. To prepare, install a stable copy of Mono on your system. I used version 0.17 on Windows XP Professional (a nice Windows installer can be found at www.go-mono.com/download.html). Next we'll need a place to work, so create a folder anywhere on your system to hold the files for our app. Finally, download the latest binary release of the data provider from www.sourceforge.net/projects/mysqlnet. You'll need to get version 0.65 or later to be Mono compatible. Unzip the archive into the folder you created (it should include assemblies ByteFX.Data.dll and SharpZipLib.dll) and you're ready to start!

The file ByteFX.Data.dll is the data provider, while SharpZipLib.dll is a support assembly that provides data compression/decompression capability. SharpZipLib is licensed under the LGPL and available from www.icsharpcode.net.

The application we're going to build is a simple console application that takes connection information on the command line and then reads SQL statements from the console and executes them. We'll include the ability to execute the SQL inside a transaction. We'll also discuss using the provider to synchronize changes in a dataset with the MySQL server. I'll call this application MonoME (ME for minimalist example!).

It's Good to Have Connections
Before we can do anything with MySQL, we have to acquire a connection to the database and be authenticated. This is accomplished using the MySQLConnection object. Listing 1 shows the initial layout of MonoME, along with the GetConnection method, which creates a MySQLConnection object and stores it in the static member variable _conn. (The code for this article can be downloaded from www.sys-con.com/dotnet/sourcec.cfm.) A number of arguments can be passed on the command line and used to construct the connection string. The connection string uses the same syntax as that used by the SQL Server provider and allows all the same aliases (server equals data source, password equals pwd, etc.). The driver also supports a nonstandard option called "use compression". This option instructs the driver and server to use compression for all communications, which can greatly speed up large queries over slow links.
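As a rough sketch of what GetConnection does (the real argument handling lives in Listing 1; the server, database, and credential values here are placeholders):

```csharp
using System;
using ByteFX.Data.MySQLClient;

class ConnectSketch
{
    static MySQLConnection _conn;

    // Builds a connection string using the SQL Server-style aliases
    // described above and opens the connection.
    static void GetConnection()
    {
        string connStr = "data source=localhost;database=test;" +
                         "user id=myuser;password=mypass;use compression=true";
        _conn = new MySQLConnection(connStr);
        _conn.Open();
    }

    static void Main()
    {
        try
        {
            GetConnection();
            Console.WriteLine("Connected.");
            _conn.Close();
        }
        catch (MySQLException ex)
        {
            Console.WriteLine("Could not connect: " + ex.Message);
        }
    }
}
```

Note the same try/catch pattern discussed below: any failure to reach or authenticate with the server surfaces as an exception rather than a return code.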

There are two other things to notice here. One is this line near the top:

using ByteFX.Data.MySQLClient;

This line makes it clear that we will be using components of the MySQLClient namespace. The other thing to notice here is the use of try/catch to catch any MySQLClient exceptions that might occur while opening the connection. It should be standard practice in your client applications to wrap database operations in a try/catch block to catch any unforeseen errors. Assuming no exceptions were thrown, we should now be authenticated and ready to roll!

Compiling Listing 1 with Mono couldn't be easier. Simply open a command line in the folder where you saved Listing 1 and the two assemblies from earlier, and type this:

mcs -r ByteFX.Data.dll Listing1.cs

mcs is the Mono C# compiler, and the -r option tells it to include a reference to the ByteFX.Data assembly in the resulting application. Assuming no errors were found, you should now have a new file called Listing1.exe. You can execute this file with the following command (remember to include the parameters GetConnection is looking for):

mono Listing1.exe -u <username> -d <database> -h <server>

Taking Command
The next thing for MonoME to do is read SQL strings for execution. You didn't come here to read about reading strings off consoles, so I'll just refer you to Listing 2 and skip ahead to the fun part, creating and using MySQLCommand objects.

Each of the SQL strings read in needs to be executed against the database server. This is accomplished with the MySQLCommand object. Commands can be created either by calling _conn.CreateCommand() or by instantiating one directly. In Listing 2 we've created a method called ExecuteCommand. In this method, we create a MySQLCommand object using the existing connection. We'll use this method to execute the SQL that is entered at the console. This method will print out the resultset in a comma-delimited format.

Creating the MySQLCommand object requires a MySQLConnection object to use and the SQL to execute. These can be provided using properties or passed into the constructor. Once the command object is ready, the SQL can be executed and results obtained using one of three methods. ExecuteNonQuery should be used to execute any non-select SQL such as delete, insert, or update commands. ExecuteScalar will execute a select command but will return only the first column as a single object. This can be useful to quickly retrieve the value of a built-in function such as MySQL's last_insert_id(). ExecuteReader will execute a select statement and return an object of type MySQLDataReader. This object will allow you to iterate over the resultset, retrieve field information and values, and even return resultset metadata.
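The three execution paths can be sketched like this, assuming an open connection and a hypothetical test table with an auto-increment id and a name column:

```csharp
using System;
using ByteFX.Data.MySQLClient;

class ExecuteSketch
{
    // Illustrative only; conn is assumed to be open.
    static void Demo(MySQLConnection conn)
    {
        MySQLCommand cmd = new MySQLCommand(
            "insert into test (name) values ('John')", conn);
        int rows = cmd.ExecuteNonQuery();          // non-select: affected-row count

        cmd.CommandText = "select last_insert_id()";
        object newId = cmd.ExecuteScalar();        // single value: first column, first row

        cmd.CommandText = "select * from test";
        MySQLDataReader reader = cmd.ExecuteReader();  // full resultset
        while (reader.Read())
            Console.WriteLine(reader.GetValue(0));
        reader.Close();
    }
}
```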

An interesting item to consider is the execution of multiple SQL statements in a single call to the MySQLCommand object. This is common using the MySQLDataAdapter class, which we'll discuss a bit later. An example query might be this:

update test set name='John' where id=1; select * from test;

The data provider supports this type of operation, but you should be aware of its behavior. Multiple non-select commands can be executed in a single call to ExecuteNonQuery, and the total number of records affected for the entire batch is returned; it's currently impossible to retrieve affected-record counts for the individual SQL statements. Multiple select statements can be executed with ExecuteReader, but only the last statement will actually be loaded into the reader for processing.
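To illustrate that batch behavior (a sketch only; the test table is hypothetical and conn is assumed to be open):

```csharp
using System;
using ByteFX.Data.MySQLClient;

class BatchSketch
{
    static void Demo(MySQLConnection conn)
    {
        MySQLCommand cmd = new MySQLCommand(
            "update test set name='John' where id=1; select * from test", conn);
        MySQLDataReader reader = cmd.ExecuteReader();
        // Only the final select is available here; the update ran, but its
        // individual affected-row count cannot be retrieved from the batch.
        while (reader.Read())
            Console.WriteLine(reader.GetValue(0));
        reader.Close();
    }
}
```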

It's All About the Data
MySQLDataReader is a forward-only, non-buffering class that retrieves the results of a select SQL command against a MySQL data source. For MonoME, we want to display the field data in a reasonably nice format. To do that, we're going to need the number and names of the fields contained in the resultset. This short bit of code gives us the name of each column.

for (int x = 0; x < reader.FieldCount; x++)
{
    String fieldname = reader.GetName(x);
}
Now that we know the column names, we want to retrieve and show the data. The procedure is very similar to the older ADO way of traversing recordsets. The following code shows how to cycle through the recordset, displaying column values.

for (int col = 0; col < reader.FieldCount; col++)
{
    object o = reader.GetValue(col);
    Console.Write( o.ToString() );
}

The type of object returned by GetValue depends on the type of that column. You could read the value into a particular type using the same GetValue method, but then you'd need to cast like this:

UInt32 id = (UInt32)reader.GetValue(col);

To avoid casting every column, the data reader has several helper methods available that return a column's value in a particular type. Here are two examples:

String s = reader.GetString(col);
Int32 id = reader.GetInt32(col);

The actual bytes that make up the value of a column are also available using the GetBytes method. Note that GetBytes can return the data for any field type; it's the only way to get the data for BLOB (binary large object) and TEXT fields (which MySQL also stores as BLOBs).

MySQLDataReader also implements the IEnumerable interface. This interface is critical to proper execution of the data reader in all environments. The foreach keyword in C# provides a good example of such an environment. In C#, foreach is used to iterate over a set of items. Here's a brief example:

foreach (char c in "Hello")
//do something with c

Because MySQLDataReader implements the IEnumerable interface, we can write this:

foreach (IDataRecord rec in reader)

In the example above, IDataRecord is an interface defined in the .NET Framework and represents a single record of a resultset.

When All Else Fails, Use an Adapter
MonoME is pretty simple, but in a real-world application data access and manipulation is more complex. One of the tools .NET gives you to deal with this complexity is the data adapter. The ByteFX driver provides a compliant implementation in the MySQLDataAdapter class.

As you have already learned in previous issues of .NET Developer's Journal, the DataSet is a very powerful object used to contain data and changes made to that data in one or more tables. DataSet objects are filled with data by calling the data adapter's Fill method and synchronized with the data source through the Update method. Listing 3 is an example of a fully populated MySQLDataAdapter. It is almost identical to the code you would use for SqlDataAdapter (SQL Server).
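In outline, filling a DataSet through the adapter looks like this (the adapter construction itself is shown in Listing 3; the table name is a placeholder):

```csharp
using System;
using System.Data;
using ByteFX.Data.MySQLClient;

class FillSketch
{
    // da is assumed to be a fully configured MySQLDataAdapter, as in Listing 3.
    static DataSet LoadTest(MySQLDataAdapter da)
    {
        DataSet ds = new DataSet();
        da.Fill(ds, "test");   // runs the select command, loads rows into ds.Tables["test"]
        Console.WriteLine(ds.Tables["test"].Rows.Count + " rows loaded");
        return ds;
    }
}
```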

The key thing to notice in Listing 3 is that MySQLCommand objects are constructed for each command type: select, insert, update, and delete. While this is not required, it is advisable because it enables the adapter to handle any type of change that occurs in the data tables covered by that adapter. When you have changes in your dataset, you would call the Update method.

// change the first column of the first row
ds.Tables["test"].Rows[0][0] = 2;
DataSet changes = ds.GetChanges();
da.Update(changes);
ds.Merge(changes);
ds.AcceptChanges();

GetChanges is a method of the DataSet object and returns a DataSet containing all the changed rows (newly added, modified, or deleted). This is an optimization: it prevents large amounts of data from being sent back to the data source when only a few rows may have changed. The Update method sends those changes to the data source, invoking the appropriate command object for each type of change. The Merge call on the next line appears unnecessary but is really very important. In many cases, the server may update information in the changed data (such as an auto-increment field), and we want to capture those changes. Finally, we accept the changes in our local dataset using AcceptChanges. This gets us back to an unchanged state.

I'm sure you've noticed that the code required to create our data adapter, while not bad in this example, could get quite lengthy. That's where MySQLCommandBuilder helps out. In a typical scenario in which you're accessing a single table, you can skip building command objects for update, insert, and delete and let MySQLCommandBuilder do the work for you. To use it, you need to supply your adapter with a select command and then register the builder. Here's how:

da.SelectCommand = new MySQLCommand("select * from test", conn);
MySQLCommandBuilder cb = new MySQLCommandBuilder( da );
da.Update(ds); // update changes

The command builder uses the select command to retrieve enough metadata from the data source to construct the remaining commands. The command builder needs the select command to include a primary key or unique column. In fact, if one isn't present an InvalidOperationException will be thrown and the commands will not be generated. It's also important to note that MySQLCommandBuilder can be used only for single-table scenarios.

Real Programmers Use Parameters
In order for the MySQLCommand objects to be generic and not tied to a single record, we need to use parameters. The data provider includes the classes MySQLParameter and MySQLParameterCollection. The MySQLCommand class includes a property called Parameters that is used to assign parameters to the command. When adding a parameter, you give it the corresponding column name from the table and the data type of the column; the data type comes from the MySQLDbType enumeration. Here is an example of adding a parameter to a command object (the constructor follows the same pattern as SqlParameter):

cmd.Parameters.Add(new MySQLParameter("@title", MySQLDbType.VarChar, 50, "title"));

In this example, we're adding a new parameter with the name "@title" mapped to the column "title" of type VarChar. The MySQL data provider also provides support for specifying which version of a parameter to use during update procedures. To accomplish this, you must use one of the DataRowVersion constants to specify which value of the parameter to use. For example:

cmd.Parameters.Add(new MySQLParameter("@id", MySQLDbType.Int, 4,
    ParameterDirection.Input, false, 0, 0, "id",
    DataRowVersion.Original, null));

This would be a very common parameter for an update command since you want to update field values for an existing row specified by a given ID. Here we are defining a parameter to be equal to the original value of the "id" column. See Listing 3 for an example of a command that uses this parameter.

Improvements to MonoME
There are several ways MonoME could be improved; we'll cover two as we finish up. The first improvement really fixes a problem: not every command given to MonoME will be a select command. While not a complete solution, simply checking for the word "select" at the start of the command and choosing the corresponding MySQLCommand execute method should help.
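One minimal way to sketch that check (a hypothetical helper, not part of MonoME's listings):

```csharp
using System;

class SqlDispatch
{
    // Returns true when the statement should go through ExecuteReader,
    // false when ExecuteNonQuery is the right call. A simple prefix test
    // is not a full SQL parser, but it covers the common case.
    public static bool IsSelect(string sql)
    {
        return sql.Trim().ToLower().StartsWith("select");
    }

    static void Main()
    {
        Console.WriteLine(IsSelect("  SELECT * FROM test"));         // True
        Console.WriteLine(IsSelect("update test set name='John'"));  // False
    }
}
```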

The other improvement we'll cover is the inclusion of transactions. MySQL doesn't support transactions on all table types, but more and more servers are using the transaction-capable InnoDB table type, so transactions are possible. The ByteFX MySQL data provider supports transactions using the MySQLTransaction class. The usage is identical to the SqlTransaction (SQL Server) class. A transaction object is created by calling BeginTransaction on the connection object. Here's how that would look:

MySQLTransaction myTrans;
// Start a local transaction
myTrans = myConnection.BeginTransaction();
// Must assign both transaction object and connection
// to Command object for a pending local transaction
myCommand.Connection = myConnection;
myCommand.Transaction = myTrans;

Note that both the connection and the transaction must be assigned to the command object before the command is executed. Once the command is executed, either a commit or rollback should be performed. Take a look:

try
{
    myCommand.ExecuteNonQuery(); // command text has already been assigned
    myTrans.Commit();
}
catch (MySQLException e)
{
    myTrans.Rollback();
}

Here we catch any exceptions caused by the command and perform a rollback in that case.

In the context of MonoME, multiple commands could be given at once, separated by a ";" character. Executing those commands inside a transaction would guarantee data integrity on the server.

Wrap Up
In this article, we've taken a brief look at using Mono and the ByteFX MySQL data provider to access your MySQL data. There's a lot we didn't cover and I would encourage you to investigate Mono and the different data providers that are available. To aid you in your discovery, I have intentionally left out the final code listing of MonoME. You have all the parts from the listings; they just need to be assembled and compiled. .NET is no longer coming to non-MS platforms. It's here and it is exciting!

More Stories By Reggie Burnett

Reggie Burnett is a .NET architect for MySQL, Inc. Reggie wrote the original ByteFX ADO.NET provider for MySQL and currently maintains that code under its current name of MySQL Connector/Net.

