Monday, January 14, 2013

Internal Temperature on a SCADAPack 334E

There's an internal thermometer on the Schneider SCADAPack 334 E-Series RTUs. I spent ages trying to find out how to access it and read the temperature on the RTU.

And then, hidden in a release note, I found it. It is an internal analogue DNP3 point with a point number of 50062.

Random, but it worked!

Friday, December 21, 2012

XML (de)Serialization - A list of a base object, containing a mix of derived objects.

So here's the problem. I've got an XML file containing a list of basic shapes I need to draw in my application. I've broken the shapes down to different classes, but stuck them in a single XML list.

Here's an example XML:
<Device>
  <Shape>       
    <Line Colour="blue">
      <Point x="29" y="55"/>
      <Point x="43" y="55"/>
    </Line>
    <Ellipse Colour="yellow">
      <Point x="44" y="50"/>
      <Point x="53" y="59"/>
    </Ellipse>
    <Triangle Colour="red">       
      <Point x="1456" y="191"/>
      <Point x="1456" y="201"/>
      <Point x="1465" y="201"/>
    </Triangle>
  </Shape>
</Device>
Each device has a shape which is a list of different drawing objects. The simple way to do this would be to have a list for each object type (Line, Ellipse, Triangle) but that's not what I wanted. The order of the XML is also the order of drawing on the screen, so I wanted these to remain in a single list as a grouping of objects, derived from a simple object class.

Just being lazy and using [XmlElement] on the Shape list in a C# class did not work, so I had to go deeper. First, let's have a look at my objects. I defined my own Point class, instead of using System.Drawing.Point, just so they could be represented as attributes in my XML (a design decision).
    public sealed class Point
    {
        [XmlAttribute]
        public int x
        {
            get;
            set;
        }
        [XmlAttribute]
        public int y
        {
            get;
            set;
        }
    }
I then created a base drawing object, with a colour and a list of points. Because the size of the Point array changes based on each derived object, the XML Serialiser ignores the Point array in the base class.
    public class DrawingObject
    {
        [XmlAttribute]
        public string Colour
        {
            get;
            set;
        }
        [XmlIgnore]
        public Point[] Points;
    }   
Now, I derive each specific object from this base class. To fix the size of the Point array, I use a private backing field and hide the base class's array with a property, using the 'new' keyword. The XmlElement attribute is defined in these derived classes for the Serialiser (and yes, I realise the Ellipse is the same as a Line, but there's other code I removed for this example. It still serves the purpose of showing different derived classes).
    public sealed class Line : DrawingObject
    {
        private Point[] _points = new Point[2];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }
    public sealed class Triangle : DrawingObject
    {
        private Point[] _points = new Point[3];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }   
    public sealed class Ellipse : DrawingObject
    {
        private Point[] _points = new Point[2];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }
Very good. Now, let's make a list of the base object and force the XML Serialiser to accept the different element names (Line, Triangle, Ellipse) in the single list. This is where we hit our first slightly different XML definition. To get this to work, .NET makes us add an enumeration which is itself ignored by the serialisation. The XML Serialiser then uses it to work out which object type each element is (http://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlchoiceidentifierattribute%28v=vs.100%29.aspx).

So we define a public enumeration of the different object types:
    [XmlType(IncludeInSchema = false)]
    public enum ShapeChoiceType
    {
        Line,
        Triangle,
        Ellipse
    }
Then in our serializing Shape class we add an array of this enumeration, so it can be matched with the list being serialised. But we get the Serialiser to ignore it.
    // Do not serialize this next field:
    [XmlIgnore]
    public List<ShapeChoiceType> ItemType;
Finally we add the List! We have to use the XmlChoiceIdentifier, pointing to our List of ItemTypes, to help cast the objects. In our XmlElement definition, we specify the name of each object type, as well as what the C# type will be.
    [XmlElement("Line", typeof(Line))]
    [XmlElement("Triangle", typeof(Triangle))]
    [XmlElement("Ellipse", typeof(Ellipse))]
    [XmlChoiceIdentifier("ItemType")]
    public List<DrawingObject> DrawingObjects
    {
        get;
        set;
    }
This builds all fine! But the first time you try to deserialise the XML in the application, we get an error! Oh dear. The CLR generates the XML serialisation classes at run-time, which is why the problem only shows up now.
    System.InvalidOperationException was caught
      Message=Unable to generate a temporary class (result=1).
    error CS1061: 'System.Collections.Generic.List<DrawingObject>' does not contain a definition for 'Length' and no extension method 'Length' accepting a first argument of type 'System.Collections.Generic.List<DrawingObject>' could be found (are you missing a using directive or an assembly reference?)
So, what does this mean? For reasons I'm not going into, I use List<T> for my collections. However, List<T> does not work with the XmlChoiceIdentifier. This Microsoft bug report (http://connect.microsoft.com/VisualStudio/feedback/details/681487/xmlserializer-consider-that-an-element-adorned-with-xmlchoiceidentifier-could-be-an-ienumerable-or-an-icollection-but-code-generation-fail) shows that, by design, it needs to be an array. So, let's change it to arrays. And hey presto, it works!

Final class definitions below!

    public sealed class Point
    {
        [XmlAttribute]
        public int x
        {
            get;
            set;
        }
        [XmlAttribute]
        public int y
        {
            get;
            set;
        }
    }
   
    public class DrawingObject
    {
        [XmlAttribute]
        public string Colour
        {
            get;
            set;
        }
        [XmlIgnore]
        public Point[] Points;
    }
   
    public sealed class Line : DrawingObject
    {
        private Point[] _points = new Point[2];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }

    public sealed class Triangle : DrawingObject
    {
        private Point[] _points = new Point[3];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }
   
    public sealed class Ellipse : DrawingObject
    {
        private Point[] _points = new Point[2];
        [XmlElement("Point")]
        new public Point[] Points
        {
            get
            {
                return _points;
            }
            set
            {
                _points = value;
            }
        }
    }
   
    [XmlType(IncludeInSchema = false)]
    public enum ShapeChoiceType
    {
        Line,
        Triangle,
        Ellipse
    }
   
    public sealed class Shape
    {
        [XmlElement("Line", typeof(Line))]
        [XmlElement("Triangle", typeof(Triangle))]
        [XmlElement("Ellipse", typeof(Ellipse))]
        [XmlChoiceIdentifier("ItemType")]
        public DrawingObject[] DrawingObjects
        {
            get;
            set;
        }

        // Do not serialize this next field:
        [XmlIgnore]
        public ShapeChoiceType[] ItemType;
    }

    public sealed class Device
    {
        [XmlElement("Shape")]
        public List<Shape> Shapes
        {
            get;
            set;
        }
    }
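With the final classes in place, loading and saving can be wrapped in a couple of generic helpers. This is only a sketch (the class name and file paths are my own, not from the original code), but it round-trips any XmlSerializer-friendly type, including the Device class above:

```csharp
using System.IO;
using System.Xml.Serialization;

// Generic save/load helpers for XmlSerializer-friendly types.
public static class XmlFile
{
    public static void Save<T>(T value, string path)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var writer = new StreamWriter(path))
        {
            serializer.Serialize(writer, value);
        }
    }

    public static T Load<T>(string path)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StreamReader(path))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}
```

Calling XmlFile.Load<Device>("shapes.xml") against the List-based version of Shape is what triggers the CS1061 error described earlier; with the array version it deserialises cleanly.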

Tuesday, July 10, 2012

QNAP MySQL issues

I was having some serious issues connecting to a MySQL database running on a QNAP NAS. The programs I wrote were hanging whenever they connected. Further debugging showed that the connection would open successfully, but take up to a minute, pausing all other threads in the program. This had only started a day earlier, when two things changed: the NAS was added to a domain, and our intranet DNS servers were changed.


So I played around with the domain for a while, but nope. That didn't make a difference. Obviously not, that’s just for file sharing. The DNS setting in QNAP was set correctly, so it couldn't have been the DNS, right? Well after a few hours of frustration and database reinitializing, I googled harder.

And got this: http://stackoverflow.com/questions/1292856/why-connect-to-mysql-is-so-slow. MySQL does its own reverse-DNS lookups on connecting clients and ignores the DNS settings in the QNAP interface. So all I had to do was change the MySQL configuration file, my.cnf, on the QNAP file system. Which I had no idea how to access.

Luckily there was another Google result that helped: http://forum.qnap.com/viewtopic.php?p=124900

So I downloaded PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html), SSH'ed into the NAS, navigated to /etc/config and then ran vi on my.cnf. Of course, it'd been more than a decade since I'd used vi and its interesting key combinations. A university had it all written out for me (http://www.washington.edu/computing/unix/vi.html). I picked an arbitrary line at the start of the my.cnf file, added

skip-name-resolve
saved it, restarted the server and hey presto! Connections were almost instantaneous again.

Thursday, October 13, 2011

SCADAPack E-Series

I've been developing some remote field RTU software for the Schneider/Control Microsystems SCADAPack 334E. This is a nifty little device that lets you program in about 6 different styles under the IEC 61131-3 standard, in a software package called ISaGRAF. I've been sticking to function blocks because it's neat and new for me and easy on the eye. There's a USB host port on the front, so I thought it'd be easy to program the device to log data to a USB key, right? Especially considering there's an example on the internet for it.

Well, I was wrong.

I was trying to get this example to work on my SCADAPack: http://resourcecenter.controlmicrosystems.com/display/public/SoftwareTools/Data+Log+Using+FBD+Language. I originally built the code on ISaGRAF for E-Series 7.83 but every time I went to debug the Data Log Using FBD Language program it would upload the code to 100% and then I would get the following error messages:

Further digging showed me that the dlog and dlogcnfg weren’t installed at all in ISaGRAF. The function blocks were there in the code I downloaded, but they had no back end to them. They weren’t in the Library at all.

I then went to the DVD that came with the USB licence key and installed the software on there. This one said “IEC61131-3 Programming language suite for SCADAPack controllers”. Previously I was using a downloaded ISaGRAF from their webpage, because I started developing in demo mode. Opening the DLog project, the dlog and dlogcnfg function blocks were there. But then when I went to debug the program, there was no option for “Configurator” in the debug Communication port link parameters!

I had a bit of a head scratch, then tried importing the libraries from the DVD version of ISaGRAF to the downloaded E-series ISaGRAF. Using the “Libraries” program of the E-Series version, I restored from archive all of the dlog associated C function blocks from C:\ISAWIN\LIB\SOURCE\ (the DVD version’s location). Going back into the programs in the downloaded E-Series version, the dlog and dlogcnfg function blocks were there!

But I went to debug via the configurator, and I came across the same debugging errors:

Frustrated, I gave a quick Google which led to this webpage: http://resourcecenter.controlmicrosystems.com/display/public/SoftwareTools/ISAGRAF+Function+block+not+implemented. I had a look at Error #66, which says:
A program is using a C function block, which is unknown in the target. Your workbench library may not correspond to your target version.
At this point, after hitting my head against the keyboard repeatedly, I decided to get a hold of tech support. Where I was able to find out:
The problem here, in a nutshell, is that the E-Series RTUs do not support the dlog functionality.

This functionality is native to the SCADAPack controllers, but the E-Series, which are born from entirely different firmware, do not have an implementation of dlog.

The E-Series controllers DO however have some data logging capabilities.

If you open the E-Series Configurator Reference Manual, take a look here: E-Series Technical Reference Manuals > SCADAPack E-Series Trend Sampler Technical Reference
Cool, well I can do logging, but it's not software controlled. It's on a timer as trend data. Not as flexible as I would have liked, and it does not save on the USB.

So I asked what I could do with the USB host port. And I was told
Unfortunately the USB Host port is not currently supported by the E-Series operating system.

It may be supported in the future, but I’m not sure when.
Excellent. Not usable at all. Despite being talked up in the documentation and advertising.

Apart from this though, the SCADAPacks are great little devices for field units!

Monday, September 5, 2011

SQLite

I was experimenting using SQLite in C#, which is surprisingly easy. This blog post details all you need, with a Hans Moleman football to the groin reference and a Jackie Chan photoshop. What more do you need to learn SQLite?!
http://www.mikeduncan.com/sqlite-on-dotnet-in-3-mins/

Wednesday, August 31, 2011

Massive C# link dump

The other day I wrote a small C# GUI test app to analyse the speed and writing abilities of different data storage methods for sharing between different processes and computers. The idea being that two almost isolated devices (except for one open port for file sharing on a NAS) can share information between each other. This meant no messaging queues and no database servers.

My initial investigation was comparing writing to a shared XML file and a shared Access file (this is now being expanded to include SQLite). It needs to be a file that can easily be removed, backed up and still be accessed by both devices at the same time. In the process of doing this, I ended up Googling about 10 things I do constantly in C# but never remember. This blog post is now going to be the mighty link dump of them all for future reference, and why they were good.

First off, I had to generate mass amounts of data quickly to flood the shared file from both devices. I used the good old random number generator, which for some reason I can never commit to memory. This website has the function I use in almost every project that requires random (http://www.c-sharpcorner.com/UploadFile/mahesh/RandomNumber11232005010428AM/RandomNumber.aspx).
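As a hedged sketch of that pattern (the class and method names are mine, not from the linked article): keep a single shared Random instance, because constructing a new Random per call in a tight loop on old .NET reseeds from the clock and can hand back repeated values.

```csharp
using System;

// One shared Random instance, reused across calls so rapid-fire
// generation doesn't reseed with the same clock tick and repeat values.
public static class DataGenerator
{
    private static readonly Random _rng = new Random();

    // Returns a random integer in [min, max).
    public static int NextValue(int min, int max)
    {
        return _rng.Next(min, max);
    }
}
```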

I normally commit my application settings to a custom XML file or the Windows Registry. I thought with this application I would be trickier (so I can just copy it across or share it through the NAS) and use Visual Studio's in built App.Config settings (http://www.codeproject.com/KB/cs/SystemConfiguration.aspx). I had never used this before, but I was shocked at how easy and versatile it is!

I created a test class for the randomly generated data. My first full test was to see how the system held up writing mass I/O to a shared XML file. Serializing the file to XML is easy, but most of my work puts it to a binary array for sending via sockets or other communications devices. Saving (as well as reading) to an actual XML file is a bit more work, but easy thanks to this website (http://codesamplez.com/programming/serialize-deserialize-c-sharp-objects).

Now that the application was reading and writing simultaneously, there were of course issues with file locks due to StreamReader and StreamWriter. Lucky, there's a work around for StreamReader locks (http://bytes.com/topic/c-sharp/answers/510916-streamreader-avoiding-ioexception-due-external-lock).
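The gist of that workaround, as I understand it, is to open the underlying FileStream yourself with FileShare.ReadWrite instead of letting StreamReader open the file exclusively, so a writer in another process doesn't trigger the IOException. A minimal sketch (class and method names are mine):

```csharp
using System.IO;

// Open a file for reading while another process may have it open for
// writing; FileShare.ReadWrite is what avoids the lock IOException.
public static class SharedReader
{
    public static string ReadAllText(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open,
                                           FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
```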

That worked and I got some good test data, even if the results were exactly as I predicted them to be (this will be another post when all my tests are complete).

The next test was doing the same thing, but storing it in Microsoft Access 2007 tables instead of XML. I did a lot of research into Access (it had been a while since I used it) and found lots of details and limitations of it (http://databases.aspfaq.com/database/what-are-the-limitations-of-ms-access.html).

Then I had to connect to it. Luckily there's a website which details pretty much every connection string you'll ever need for any database operations (http://www.connectionstrings.com/access-2007).

Databases have different time fields than .Net defaults. Whenever writing data to a DateTime field in a database I generally manually format the data in a custom ToString() call. Here's a website which details all you need to know about formatting .Net DateTime objects in whatever style you so fancy (http://www.csharp-examples.net/string-format-datetime/).
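For example (a sketch; the exact format string depends on the target database, but this sortable year-first layout suits Access and MySQL-style date/time text):

```csharp
using System;

// Format a DateTime for writing into a database text/date field.
public static class DbTime
{
    public static string ToDbString(DateTime value)
    {
        // 24-hour clock, zero-padded, year first so it sorts as text too.
        return value.ToString("yyyy-MM-dd HH:mm:ss");
    }
}
```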

Finally, bulk MS Access read/writes/deletes cause the file to bloat. It won't shrink back down unless you compact it. This is generally done in the Access software, sometimes on file close, but in a programmatic environment it never happens automatically. So you've got to do it all yourself in code (http://techieyogi.blogspot.com/2009/11/how-to-compact-access-2007-database.html).

Wednesday, July 27, 2011

C#: System.Data.SqlClient.SqlException : Must declare the table variable

So now I'm doing more and more in C# I'm trying to do it properly. As in not just taking my old habits and writing it in C# code, but using all sorts of C# proper style. Since a lot of my work is with databases right now, this means I'm experimenting a lot with LINQ and proper SQL queries.

I used to build SQL queries as pure strings, just concatenating the variables straight in. This is unsafe. Hugely so. If someone tampered with the input, they could put whatever they wanted into that query (classic SQL injection). This isn't too big a problem for me, as my code runs as a background service hidden on a computer behind about 30,000 firewalls that's probably only ever going to be accessed by me and my boss.

But still, the proper way to do it is to set up a query in advance and pass in parameters, which is basically an abstraction layer of variables in the SQL query. Observe:

backupQuery = "SELECT tableData FROM @storageTableName WHERE dataSource = @dataSource AND dbName = @dbName AND tableName = @tableName";
backupCommand = new SqlCommand(backupQuery, backupConnection);
backupCommand.Parameters.Add(new SqlParameter("@storageTableName", _backupTableName));
backupCommand.Parameters.Add(new SqlParameter("@dataSource", _config.dataSource));
backupCommand.Parameters.Add(new SqlParameter("@dbName", _config.database));
backupCommand.Parameters.Add(new SqlParameter("@tableName", tableConfig.name));

At runtime, everything with an "@" prefix gets bound to the matching parameter value. Very nice. It looks all proper. But when running it, I'd constantly get this:

System.Data.SqlClient.SqlException was caught
Message=Must declare the table variable "@storageTableName".
Source=.Net SqlClient Data Provider
ErrorCode=-2146232060
Class=16

Joy. Lots of Googling and head scratching lead me to this forum.

In a nutshell, although parameters are an excellent way of using variables in SQL code, they can't be used for table names. So in the end, I had to rewrite my initial string query as:
backupQuery = "SELECT tableData FROM " + _backupTableName + " WHERE dataSource = @dataSource AND dbName = @dbName AND tableName = @tableName";
I'm sure there's a good reason for this that I don't know about, but it's kind of annoying.
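Since the table name can't be parameterised, one common mitigation (not from the forum post, just a standard approach) is to validate it against a whitelist of known table names before concatenating. The table names here are made up for the sketch:

```csharp
using System;
using System.Collections.Generic;

public static class SqlSafety
{
    // Only table names we created ourselves are allowed into the query text.
    private static readonly HashSet<string> _knownTables =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "BackupStore", "ArchiveStore"   // hypothetical names
        };

    public static string BuildBackupQuery(string tableName)
    {
        if (!_knownTables.Contains(tableName))
            throw new ArgumentException("Unknown table name: " + tableName);

        // Table name is concatenated; everything else stays a parameter.
        return "SELECT tableData FROM " + tableName +
               " WHERE dataSource = @dataSource AND dbName = @dbName" +
               " AND tableName = @tableName";
    }
}
```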

Tuesday, May 3, 2011

Firefox 4. Save tabs.

I finally got around to upgrading to Firefox 4, a bit late, I know. And after the upgrade, my tabs disappeared. In fact, they disappeared every time I closed the browser. Looking through the program options, it seems there was nowhere to enable this feature, which was stock standard in Firefox 3.

Luckily, I found this website: http://support.mozilla.com/en-US/questions/800462

If you're too lazy to read it: the setting lives in the about:config page, which you reach by typing that into the address bar.

Monday, March 21, 2011

Time from an SQL server

My latest application does time based queries from a Microsoft SQL Server. After half a day of debugging, I found out that the SQL Server (which I don't have more than just read access to) and my computer had different system times, causing my queries to be invalid.

Luckily, there's a way to query a database and return the time.
SELECT getdate() AS time
I put that into my data reader before any query, and bam! There's the time of the SQL server, returned as a single row with a column named time. I'm not sure of the internals, but in C# it was easily cast to a DateTime object using Convert.ToDateTime().
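In C# this can be done with ExecuteScalar rather than a full data reader. A sketch, assuming an already-open SqlConnection (the connection itself is up to you):

```csharp
using System;
using System.Data.SqlClient;

// Fetch the database server's clock rather than trusting the local one.
public static class ServerClock
{
    public static DateTime GetServerTime(SqlConnection connection)
    {
        using (var command = new SqlCommand("SELECT getdate()", connection))
        {
            // ExecuteScalar returns the single value from the single row.
            return Convert.ToDateTime(command.ExecuteScalar());
        }
    }
}
```

For MySQL the same pattern works with "SELECT NOW()" and the MySQL connector's equivalent command class.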

Updated because now I use MySQL. The equivalent call is:
SELECT NOW()

System.Threading.SynchronizationLockException was caught

I was coding away some parallel threads today in C# when bam! This very vague exception kept getting thrown over and over again.

I had a block of code similar to this:
    while (!Monitor.TryEnter(_removals)) ;
    foreach (AlarmMessage alm in _removals)
    {
        RemoveAlarm(alm);
    }
    _removals = new List<AlarmMessage>();
    Monitor.Exit(_removals);

It looked all fine in my head. I lock the object (a List<>), iterate through it running a "remove" function on each item, reset the list and then unlock the object.

However, on the Monitor.Exit I kept getting the System.Threading.SynchronizationLockException. Grrrrrrr.

I figured it out though. Obviously creating the new List<>() creates a brand new object. The lock was taken on a reference to the original object, so when it comes to Monitor.Exit(), it's trying to unlock via the new reference. Whoops. So instead of making a new List when I want to clear it, I just do it the proper way and call
_removals.Clear().
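A corrected sketch of the pattern, using the lock statement (which compiles down to Monitor.Enter/Exit on one fixed reference) and clearing the list in place. The alarm type and remove step here are stand-ins for the ones in my code:

```csharp
using System.Collections.Generic;

// The locked reference never changes, so Enter and Exit always
// see the same object.
public class AlarmPruner
{
    private readonly List<string> _removals = new List<string>();

    public void Add(string alarm)
    {
        lock (_removals) { _removals.Add(alarm); }
    }

    // Processes and empties the pending list; returns how many it handled.
    public int Flush()
    {
        lock (_removals)
        {
            int count = _removals.Count;
            foreach (string alm in _removals)
            {
                // RemoveAlarm(alm) would go here
            }
            _removals.Clear();  // clear in place instead of new List<string>()
            return count;
        }
    }
}
```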

Friday, March 18, 2011

Server Clusters

One of my projects requires a database to be shared over multiple servers with failover properties, so I began investigating clustering of SQL databases. Microsoft SQL does it of course, but for our little project it would’ve been a $6000 license cost. Sod that. So I thought I’d try out the ever reliable, open-source/free databases.

Disclaimer: these are my own discoveries. For all I know, I went about this the complete wrong way. I’m just dumping it all here for future reference or in case anyone else had similar issues. I found during the course of this experiment that a lot of this stuff wasn’t very well documented and in hard to find places. So this is just me putting it all together. My implementations are tailored for my setup only, and I know that it is outside of the usage specs of some of these cluster definitions.

MySQL:

MySQL has a clustering version (I used ver. 7). It uses a system where there are multiple data nodes, a single management server and one or more SQL servers. At a minimum, they recommend 4 machines: 1 management node, 2 data nodes and 1 SQL node. My setup was only ever for 2 servers, so I split it evenly down the middle: each server ran a management node, a data node and an SQL node so that they could run independently on failover.

I set it up using a hybrid of these two amazing tutorials:

http://downloads.mysql.com/tutorials/cluster/GetMySQLClusterRunning-Windows.pdf

http://planet.mysql.com/entry?id=20198

In getting things to work like the Linux example, I had to change one thing. Instead of having ndbcluster=true in the .conf file, I changed it to just ndbcluster (its presence alone means true). My two config files were as follows:

Config.ini:

[ndbd default]
noofreplicas=2

[ndbd]
hostname=host1
id=1

[ndbd]
hostname=host2
id=2

[ndb_mgmd]
id = 101
hostname=host1

[ndb_mgmd]
id = 102
hostname=host2

[mysqld]
id=51
hostname=host1

[mysqld]
id=52
hostname=host2

my.cnf:

[mysqld]
ndb-nodeid=52
ndbcluster
datadir=c:\\mysql\my_cluster\mysqld_data
basedir=c:\\mysql\mysqlc
port=5001
server-id=52
log-bin=host2-bin
This worked amazingly when everything was running in console mode. Occasionally there was an error (Error 2003: Can’t connect to MySQL server on ‘) when accessing the SQL database from another computer, but the following fixed that by setting up permissions for the computer logging in. For production databases though, you should definitely use proper usernames/passwords/permissions.

1. Run mysql -u root -P
2. Run GRANT ALL ON *.* TO ''@'';

Excellent.

To get it working properly though, I needed it to run on my two servers as Windows Services.

To do that you simply run each application (ndbd, ndb_mgmd, mysqld) with the --install flag. Passing in the command-line options is a bit harder, as they have to be moved to a file called my.ini located in the base directory of MySQL or in the Windows folder. Each application's options are listed under its executable header (e.g. ndbd goes under [ndbd]), using the full option names (the -- form, not the short - form). The my.ini for one server (host2) that I used is as follows:

My.ini:

[ndb_mgmd]
config-file=c:/MySQL/my_cluster/conf/config.ini
configdir=c:/MySQL/my_cluster/conf

[ndbd]
ndb-connectstring=host2:1186

[mysqld]
ndb-nodeid=52
ndbcluster
datadir=c:\\mysql\my_cluster\mysqld_data
basedir=c:\\mysql\mysqlc
port=5001
server-id=52
ndb-connectstring=localhost:1186
log-bin=host2-bin
ndb-extra-logging

The management and the data nodes ran perfectly as servers. However, I hit a snag with the database. The service would start, stop and display the following message:

Could not start the MySQL service on local computer error 1067: the process terminated unexpectedly.

Well, I tried everything. When I ran the service in console mode (from the command line with --console) it ran perfectly. I could not find a solution to save my skin. Much Googling showed that this is a common problem with MySQL running on Windows, and there's not really a fix for it. I tried it on XP SP2, XP SP3, Server 2008 and Server 2008 R2 with no joy. It's really a bugger, as in console mode it worked perfectly for me. I'm sure on Linux/Unix/whatever it'd be a great solution if you need a simple cluster.

Just a note, when creating a table in MySQL that is clustered, don’t forget to set engine=ndb or engine=ndbcluster in the CREATE statement.
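For example, a clustered table declaration might look like this (table and columns are made up; the ENGINE clause is the important part, as it is what puts the table on the data nodes):

```sql
-- Hypothetical table; ENGINE=NDBCLUSTER (or ENGINE=NDB) makes it clustered.
CREATE TABLE sensor_log (
    id INT NOT NULL AUTO_INCREMENT,
    reading DOUBLE,
    logged_at DATETIME,
    PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;
```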

So then I moved on to PostgreSQL.

PostgreSQL:

I installed the latest version on one PC and set up a dummy database. I then set up a version on the other PC.

To get replication I followed the steps from :

http://www.postgresql.org/docs/9.0/interactive/warm-standby.html#STREAMING-REPLICATION

but with a few differences I stole from:

http://brandonkonkle.com/blog/2010/oct/20/postgres-9-streaming-replication-and-django-balanc/

Instead of making a “backup” using pg_dump or PGAdminIII, I copied across the entire PostgreSQL data folder (in my case, the 9.0 folder).

Side note: when using backup in PGAdminIII for a custom database, it can give you a “database not found” error (pg_dump: [archiver (db)] connection to database "ClusterDB" failed: FATAL: database "ClusterDB" does not exist). To get around this, copy the command-line text from the PGAdminIII message output to your own command line, and edit it so the last argument, the name of the database, is surrounded by plain quotation marks only, not \". For some reason, PGAdminIII adds the string literal escape character before each ".

I tried starting the service through the windows service panel at this point, but it kept locking on start-up. To kill it, look for the applications pg_ctl.exe and postgres.exe in your Windows Task Manager. Killing these will stop the service from starting, especially if it loops (which it does when incorrectly starting). If your server still won’t start, don’t forget to check both the Windows event logs (application) and the log file in the data/pg_log folder.

So after copying across the file structure (make sure you backup the old one first, you’ll need a few things from it) I had to set the folder security attributes to allow the same user (in my case it was ./postgres) to make changes to the folder (right click the folder>properties>security>edit). This allows the server to change any postmaster lock files. You may still get some errors about postmaster.pid being unable to be created, so it’s best just to delete the postmaster.pid file from the data folder.

I found I kept getting the error “FATAL: no pg_hba.conf entry for host "::1", user "postgres", database "postgres", SSL off”. To combat this, I just went back to my backup of the original installed database and copied across the three .conf files again (pg_hba.conf, pg_ident.conf, postgresql.conf).

At this point, my server was running. After a few hours and a loss of hair.

Getting it to run properly, well, it was a bit of a bitch. I kept getting:

LOG: streaming replication successfully connected to primary
FATAL: the database system is starting up

To combat this, I put both servers in hot_standby mode (instead of archive mode) and redid the entire process again.

So it works. However, Postgres is not a proper cluster; it's more of a replication. Any update to the master flowed through to the slave server flawlessly. It did use quite a bit of network bandwidth though, shipping its log files. In the event of a failure you'd just automatically connect to the standby server, but coming back from it would require manual work and database copying of any new updates. A lot of effort, when all I want is a cluster!

Microsoft SQL Server it is then.

...or is it?

It turns out Microsoft SQL Server 2008 clusters requires the underlying servers to be part of a Microsoft Failover Cluster. Microsoft Failover Clusters can be configured in Windows Server 2008 Enterprise and Datacenter editions. Unfortunately, all of my servers are running Standard.

So the original idea is out of the window, unless I also pay to upgrade my Windows Servers. Back to square one, and over budget. I slept on it and came back the next day with my solution (as of now).

I liked the MySQL cluster database. It's the one I wanted to use. It just wouldn't run as a service! So, I made it into a service. Simple. I fired up C# and wrote a simple service that checks the process list for MySQL (its process name is “mysqld”). If it's running, the service goes back to sleep. If it's not running, it launches the program as a hidden background process in Windows.

Here’s the core of the service, which I make run in its own thread:

    while (_running) // Run until told to stop
    {
        // get the list of all currently running processes
        Process[] proList = Process.GetProcesses();

        // see if our application is in the currently running process list
        Process query = (from clsProcess in proList
                         where clsProcess.ProcessName == "mysqld"
                         select clsProcess).FirstOrDefault();

        if (query != null)
        {
            // it's running, tell the user
            Console.WriteLine("MySQL is running.");
        }
        else
        {
            // it's not running
            Process pro = new Process();
            Console.WriteLine("MySQL not running. Attempting to start.");

            // point the process starter to where the file lives
            pro.StartInfo.FileName = "c:\\mysql\\mysqlc\\bin\\mysqld.exe";
            // add the arguments
            pro.StartInfo.Arguments = "--console";
            // hide the console window of the application
            pro.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            pro.StartInfo.UseShellExecute = false;
            pro.StartInfo.RedirectStandardOutput = true;
            pro.StartInfo.RedirectStandardError = true;
            pro.OutputDataReceived += new DataReceivedEventHandler(pro_OutputDataReceived);

            // start the process
            try
            {
                pro.Start();
                // without this call, OutputDataReceived never fires
                pro.BeginOutputReadLine();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        // Take a nap.
        Thread.Sleep(20000);
    }

    private void pro_OutputDataReceived(object sender, DataReceivedEventArgs e)
    {
        Console.WriteLine(e.Data);
    }

It operates as such:

1. Queries the process list for MySQL (which is known in the process world as “mysqld”)

2. If not found in the list it launches the process

a. You need the location of the executable

b. I passed it the argument "--console" to force it to run. Through trial and error I found that if you start mysqld with no arguments, it likes to time out rather quickly.

c. I then set the WindowStyle to hidden so a console window isn’t displayed on the server

d. It's optional, but I use the OutputDataReceived event to catch all of the output in order to log it.

3. Sleeps.

Pretty simple application but it does the job. When running, the database appears in the task manager (“mysqld”) and it works perfectly. If I manually kill the database, it restarts within 20 seconds.
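As an aside, the framework has a shortcut for this check: Process.GetProcessesByName does the name filtering for you and returns an empty array when nothing matches. A minimal sketch (the process name here is made up, so the check comes back false):

```csharp
using System;
using System.Diagnostics;

class ProcessCheck
{
    // Returns true if at least one process with the given name is running.
    static bool IsRunning(string name)
    {
        // GetProcessesByName never returns null; empty means "not running".
        return Process.GetProcessesByName(name).Length > 0;
    }

    static void Main()
    {
        // A name that should never exist on a real machine.
        Console.WriteLine(IsRunning("no-such-process-xyz"));
    }
}
```

The LINQ query above does the same thing, but this version skips enumerating every process yourself.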

To make it act like a proper service though, I needed to kill the database if the underlying service was shut down. So in the OnStop() function of the service I called the following function after killing the main loop thread:

    public void Shutdown()
    {
        if (_running)
        {
            Process[] proList = Process.GetProcesses();

            // look for the hidden instance we launched
            Process query = (from clsProcess in proList
                             where clsProcess.ProcessName == "mysqld"
                             select clsProcess).FirstOrDefault();

            if (query != null)
            {
                // it's running, kill it
                Console.WriteLine("mysqld is running in hidden mode. Killing.");
                query.Kill();
            }
        }
    }

It just looks through the running process list and, if MySQL is found, kills it.

With this service running MySQL in a sneaky way, everything works. Despite the service launching a non-service program, it still works at startup and with no users logged in. I have found that the console output from MySQL won't log properly when run as a service, though. I'll work through that later, but for now I'm happy that my clustered database is working!
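As a sketch of the fix I have in mind for the logging (the log path is an assumption, not part of the original service), the output handler can append each line to a file instead of the console, since a service has no console to write to:

```csharp
using System;
using System.Diagnostics;
using System.IO;

class MySqlOutputLogger
{
    private readonly string _logPath; // assumed location; adjust to suit
    private readonly object _lock = new object();

    public MySqlOutputLogger(string logPath)
    {
        _logPath = logPath;
    }

    // Wire this up in place of the console handler:
    //   pro.OutputDataReceived += logger.OnOutput;
    public void OnOutput(object sender, DataReceivedEventArgs e)
    {
        if (e.Data == null) return; // a null line signals end of stream
        Append(e.Data);
    }

    public void Append(string line)
    {
        // Serialise writes in case stdout and stderr events fire together.
        lock (_lock)
        {
            File.AppendAllText(_logPath, line + Environment.NewLine);
        }
    }
}
```

The lock matters because the redirected-output events arrive on thread-pool threads, not on the service's main thread.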

Monday, December 20, 2010

C# form topmost in application scope only

I was creating a little "Find" dialog box for my recent treeview application. I wanted the box to behave like the one in Visual Studio itself, in that the find box is topmost for anything in Visual Studio, but not over the rest of the programs that are running.

In the C# form designer, you can set the form to have "Topmost = true", which places the form on top of every window in the entire Windows scope. Not quite what I wanted...

To get this behaviour, call the Show() method when displaying the form, passing it a reference to the parent form that it should always stay on top of. Like so:

FindForm frm = new FindForm();
frm.Show(this);

(FindForm is my search dialog, and this is the parent form calling it.)

Now it's topmost in the application scope but not in the Windows scope!
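An equivalent approach, for what it's worth (a sketch, with a plain Form standing in for my FindForm), is to set the dialog's Owner property before the parameterless Show(). An owned form floats above its owner but not above other applications:

```csharp
using System.Windows.Forms;

public class ParentForm : Form
{
    private void ShowFindDialog()
    {
        // A plain Form stands in for the post's FindForm here.
        Form find = new Form { Text = "Find" };
        find.Owner = this;  // owned forms stay above their owner...
        find.Show();        // ...but not above other running programs
    }
}
```

Show(this) sets Owner for you, so the two are interchangeable; the property form is handy when the dialog is created somewhere the owner isn't in scope.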

Wednesday, December 15, 2010

C# GUI Hangs on Invoke

If you've ever tried doing any advanced work with GUIs in C#, you'll be friends/enemies with the good old-fashioned Cross-thread operation not valid exception. GUI controls run on the UI thread, and no other thread can access them directly.

The great work around is to invoke the control. For instance, I use the following method to update a richTextBox control from any thread I happen to be in:

public delegate void StringParameterDelegate(string value);

public void UpdateRichTextBoxStatus(string value)
{
    if (InvokeRequired)
    {
        // We're not in the UI thread, so we need to call Invoke
        Invoke(new StringParameterDelegate(UpdateRichTextBoxStatus), new object[] { value });
        return;
    }

    // Must be on the UI thread if we've got this far
    richTextBoxStatus.Text = value + richTextBoxStatus.Text;
}



It checks to see if the control needs invoking (to avoid the cross-thread exception). If it does, it recursively calls itself via a delegate on the UI thread and then updates the richTextBox.

This worked fine and dandy most of the time. However, I had one condition where different invocations were happening at the same time, on different controls, invoked from an external, managed DLL. What happened is the invoke occurred, but then the whole program hung on a call to the DLL.

Further investigation (god bless the Parallel Stacks threading debug view) showed that one of the invocations went into a sleep state, waiting for the other invocation to finish. The Invoke call itself is synchronous, so the next invoke call was waiting for the original to finish, and hence the program just hung there. Waiting... waiting... never finishing.

Well, there's an easy workaround for this: instead of calling Invoke, call BeginInvoke! This makes the call asynchronous. The function thus becomes:

public delegate void StringParameterDelegate(string value);

public void UpdateRichTextBoxStatus(string value)
{
    if (InvokeRequired)
    {
        // We're not in the UI thread, so we need to call BeginInvoke
        BeginInvoke(new StringParameterDelegate(UpdateRichTextBoxStatus), new object[] { value });
        return;
    }

    // Must be on the UI thread if we've got this far
    richTextBoxStatus.Text = value + richTextBoxStatus.Text;
}