Monday, January 14, 2013
Internal Temperature on a SCADAPack 334E
There's an internal thermometer on the Schneider SCADAPack 334 E-Series RTUs. I spent ages trying to find out how to access it and read the temperature on the RTU.
And then, hidden in a release note, I found it. It is an internal analogue DNP3 point with a point number of 50062.
Random, but it worked!
Labels:
334e,
DNP3,
E-Series,
IEC 61131-3,
ISaGRAF,
RTU,
SCADAPack,
temperature
Friday, December 21, 2012
XML (de)Serialization - A list of a base object, containing a mix of derived objects.
So here's the problem. I've got an XML file containing a list of basic shapes I need to draw in my application. I've broken the shapes down to different classes, but stuck them in a single XML list.
Each device has a shape which is a list of different drawing objects. The simple way to do this would be to have a list for each object type (Line, Ellipse, Triangle), but that's not what I wanted. The order of the XML is also the order of drawing on the screen, so I wanted these to remain in a single list as a grouping of objects, derived from a simple object class.
Here's an example XML:
<Device>
<Shape>
<Line Colour="blue">
<Point x="29" y="55"/>
<Point x="43" y="55"/>
</Line>
<Ellipse Colour="yellow">
<Point x="44" y="50"/>
<Point x="53" y="59"/>
</Ellipse>
<Triangle Colour="red">
<Point x="1456" y="191"/>
<Point x="1456" y="201"/>
<Point x="1465" y="201"/>
</Triangle>
</Shape>
</Device>
Just being lazy and using [XmlElement] on the Shape list in a C# class did not work, so I had to go deeper. First, let's have a look at my objects. I defined my own Point class, instead of using System.Drawing.Point, just so they could be represented as attributes in my XML (a design decision).
public sealed class Point
{
[XmlAttribute]
public int x
{
get;
set;
}
[XmlAttribute]
public int y
{
get;
set;
}
}
I then created a base drawing object, with a colour and a list of points. Because the size of the Point array changes based on each derived object, the XML Serialiser ignores the Point array in the base class.
public class DrawingObject
{
[XmlAttribute]
public string Colour
{
get;
set;
}
[XmlIgnore]
public Point[] Points;
}
Now, I derive each specific object from this base class. To set the size of the Point array, I use a private field and then modify the base array to become a getter, using the 'new' keyword. The XmlElement is defined in these derived classes for the Serialiser (and yes, I realise the Ellipse is the same as a Line, but there's other code I removed for this example. It still serves the point of showing different derived classes).
public sealed class Line : DrawingObject
{
private Point[] _points = new Point[2];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
public sealed class Triangle : DrawingObject
{
private Point[] _points = new Point[3];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
public sealed class Ellipse : DrawingObject
{
private Point[] _points = new Point[2];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
Very good. Now, let's make a list of the base object and force the XML Serialiser to add the different element names (Line, Triangle, Ellipse) to the single list. This is when we hit our first slightly different XML definition. To get this to work, .NET makes us add an enumeration which is ignored by the XML. The XML Serialiser then uses this to help detect what object type it is (http://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlchoiceidentifierattribute%28v=vs.100%29.aspx).
So we define a public enumeration of the different object types:
[XmlType(IncludeInSchema = false)]
public enum ShapeChoiceType
{
Line,
Triangle,
Ellipse
}
Then in our serializing Shape class we add an array of this enumeration, so it can be matched with the list being serialised. But we get the Serialiser to ignore it.
// Do not serialize this next field:
[XmlIgnore]
public List<ShapeChoiceType> ItemType;
Finally we add the List! We have to use the XmlChoiceIdentifier, pointing to our List of ItemTypes, to help cast the objects. In our XmlElement definition, we specify the name of each object type, as well as what the C# type will be.
[XmlElement("Line", typeof(Line))]
[XmlElement("Triangle", typeof(Triangle))]
[XmlElement("Ellipse", typeof(Ellipse))]
[XmlChoiceIdentifier("ItemType")]
public List<DrawingObject> DrawingObjects
{
get;
set;
}
This builds all fine! But the first time you try to deserialise the XML in the application, we get an error! Oh dear. With XML, the CLR tends to compile the XML serialisation classes at run-time.
System.InvalidOperationException was caught
Message=Unable to generate a temporary class (result=1).
error CS1061: 'System.Collections.Generic.List' does not contain a definition for 'Length' and no extension method 'Length' accepting a first argument of type 'System.Collections.Generic.List' could be found (are you missing a using directive or an assembly reference?)
So, what does this mean? For reasons I'm not going into, I use List<T> for my collections. However, List does not work with the XmlChoiceIdentifier. This Microsoft bug report (http://connect.microsoft.com/VisualStudio/feedback/details/681487/xmlserializer-consider-that-an-element-adorned-with-xmlchoiceidentifier-could-be-an-ienumerable-or-an-icollection-but-code-generation-fail) shows that by design, it needs to be an array. So, let's change it to arrays. And hey presto, it works!
Final class definitions below!
public sealed class Point
{
[XmlAttribute]
public int x
{
get;
set;
}
[XmlAttribute]
public int y
{
get;
set;
}
}
public class DrawingObject
{
[XmlAttribute]
public string Colour
{
get;
set;
}
[XmlIgnore]
public Point[] Points;
}
public sealed class Line : DrawingObject
{
private Point[] _points = new Point[2];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
public sealed class Triangle : DrawingObject
{
private Point[] _points = new Point[3];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
public sealed class Ellipse : DrawingObject
{
private Point[] _points = new Point[2];
[XmlElement("Point")]
new public Point[] Points
{
get
{
return _points;
}
set
{
_points = value;
}
}
}
[XmlType(IncludeInSchema = false)]
public enum ShapeChoiceType
{
Line,
Triangle,
Ellipse
}
public sealed class Shape
{
[XmlElement("Line", typeof(Line))]
[XmlElement("Triangle", typeof(Triangle))]
[XmlElement("Ellipse", typeof(Ellipse))]
[XmlChoiceIdentifier("ItemType")]
public DrawingObject[] DrawingObjects
{
get;
set;
}
// Do not serialize this next field:
[XmlIgnore]
public ShapeChoiceType[] ItemType;
}
public sealed class Device
{
[XmlElement("Shape")]
public List<Shape> Shapes
{
get;
set;
}
}
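For completeness, here's a quick round-trip sketch (mine, not from the original post) showing the final classes in action: compact copies of the definitions above, plus a Main that deserialises the example XML and prints what the choice array reports. The payoff is that XmlSerializer fills in ItemType during deserialisation, so you know which element name produced each object.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Compact copies of the post's classes so this sketch compiles on its own.
public sealed class Point
{
    [XmlAttribute] public int x { get; set; }
    [XmlAttribute] public int y { get; set; }
}

public class DrawingObject
{
    [XmlAttribute] public string Colour { get; set; }
    [XmlIgnore] public Point[] Points;
}

public sealed class Line : DrawingObject
{
    private Point[] _points = new Point[2];
    [XmlElement("Point")] new public Point[] Points { get { return _points; } set { _points = value; } }
}

public sealed class Triangle : DrawingObject
{
    private Point[] _points = new Point[3];
    [XmlElement("Point")] new public Point[] Points { get { return _points; } set { _points = value; } }
}

public sealed class Ellipse : DrawingObject
{
    private Point[] _points = new Point[2];
    [XmlElement("Point")] new public Point[] Points { get { return _points; } set { _points = value; } }
}

[XmlType(IncludeInSchema = false)]
public enum ShapeChoiceType { Line, Triangle, Ellipse }

public sealed class Shape
{
    [XmlElement("Line", typeof(Line))]
    [XmlElement("Triangle", typeof(Triangle))]
    [XmlElement("Ellipse", typeof(Ellipse))]
    [XmlChoiceIdentifier("ItemType")]
    public DrawingObject[] DrawingObjects { get; set; }

    // The choice array: one enum entry per element in DrawingObjects.
    [XmlIgnore] public ShapeChoiceType[] ItemType;
}

public sealed class Device
{
    [XmlElement("Shape")] public List<Shape> Shapes { get; set; }
}

public static class Program
{
    public static void Main()
    {
        string xml =
            "<Device><Shape>" +
            "<Line Colour=\"blue\"><Point x=\"29\" y=\"55\"/><Point x=\"43\" y=\"55\"/></Line>" +
            "<Ellipse Colour=\"yellow\"><Point x=\"44\" y=\"50\"/><Point x=\"53\" y=\"59\"/></Ellipse>" +
            "</Shape></Device>";

        var serializer = new XmlSerializer(typeof(Device));
        Device device;
        using (var reader = new StringReader(xml))
            device = (Device)serializer.Deserialize(reader);

        // ItemType is populated during deserialisation, so each entry tells us
        // which element name produced the object at the same index.
        foreach (var shape in device.Shapes)
            for (int i = 0; i < shape.DrawingObjects.Length; i++)
                Console.WriteLine("{0} {1}", shape.ItemType[i], shape.DrawingObjects[i].Colour);
        // prints "Line blue" then "Ellipse yellow"
    }
}
```

The same serializer, pointed the other way with Serialize(), writes the list back out with the correct element names, which is what makes the single mixed list workable.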
Tuesday, July 10, 2012
QNAP MySQL issues
I was having some serious issues connecting to a MySQL database running on a QNAP NAS. The programs I wrote were hanging whenever they connected. Further debugging showed that the connection would open successfully, but take up to a minute, pausing all other threads in the program. This had only started a day ago, when two things happened: I added the NAS to a domain, and our intranet DNS servers were changed.
So I played around with the domain for a while, but nope. That didn't make a difference. Obviously not, that’s just for file sharing. The DNS setting in QNAP was set correctly, so it couldn't have been the DNS, right? Well after a few hours of frustration and database reinitializing, I googled harder.
And got this: http://stackoverflow.com/questions/1292856/why-connect-to-mysql-is-so-slow. MySQL does its own DNS lookups and doesn't bother with the settings in QNAP. So all I had to do was change the MySQL configuration file, my.cnf, on the QNAP file system. Which I had no idea how to access.
Luckily there was another Google result that helped: http://forum.qnap.com/viewtopic.php?p=124900
So I downloaded PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html), SSH'ed into the NAS, navigated to /etc/config and then ran vi on my.cnf. Of course, it'd been a decade-plus since I used vi and its interesting key combinations. A university had it all written out for me (http://www.washington.edu/computing/unix/vi.html). I picked an arbitrary line at the start of the my.cnf file, added
skip-name-resolve
saved it, restarted the server and hey presto! Connections were almost instantaneous again.
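For reference, the change looks like this in my.cnf (the [mysqld] section header is my assumption about where it belongs; I just picked an arbitrary line near the top of the file):

```
[mysqld]
skip-name-resolve
```

One caveat worth flagging: with name resolution skipped, MySQL stops doing a reverse DNS lookup on each new connection, so any accounts granted by host name rather than IP address will no longer match.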
Thursday, October 13, 2011
SCADAPack E-Series
I've been developing some remote field RTU software for the Schneider/Control Microsystems SCADAPack 334E. This is a nifty little device that lets you program in about six different styles using the IEC 61131-3 standard in a software package called ISaGRAF. I've been sticking to function blocks because it's neat and new for me and easy on the eye. There's a USB host port on the front, so I thought it'd be easy to program the device to log data to a USB key on it, right? Especially considering there's an example on the internet for it.
Well, I was wrong.
I was trying to get this example to work on my SCADAPack: http://resourcecenter.controlmicrosystems.com/display/public/SoftwareTools/Data+Log+Using+FBD+Language
I originally built the code on ISaGRAF for E-Series 7.83, but every time I went to debug the Data Log Using FBD Language program it would upload the code to 100% and then I would get the following error messages:
Further digging showed me that the dlog and dlogcnfg function blocks weren't installed at all in ISaGRAF. The function blocks were there in the code I downloaded, but they had no back end to them. They weren't in the Library at all.
I then went to the DVD that came with the USB licence key and installed the software on there. This one said "IEC61131-3 Programming language suite for SCADAPack controllers". Previously I was using a downloaded ISaGRAF from their webpage, because I started developing in demo mode. Opening the DLog project, the dlog and dlogcnfg function blocks were there. But then when I went to debug the program, there was no option for "Configurator" in the debug Communication port link parameters!
I had a bit of a head scratch, then tried importing the libraries from the DVD version of ISaGRAF to the downloaded E-Series ISaGRAF. Using the "Libraries" program of the E-Series version, I restored from archive all of the dlog associated C function blocks from C:\ISAWIN\LIB\SOURCE\ (the DVD version's location). Going back into the programs in the downloaded E-Series version, the dlog and dlogcnfg function blocks were there!
But when I went to debug via the Configurator, I came across the same debugging errors.
Frustrated, I gave a quick Google which led to this webpage: http://resourcecenter.controlmicrosystems.com/display/public/SoftwareTools/ISAGRAF+Function+block+not+implemented. I had a look at Error #66, which says:
A program is using a C function block, which is unknown in the target. Your workbench library may not correspond to your target version.
At this point, after hitting my head against the keyboard repeatedly, I decided to get a hold of tech support. Where I was able to find out:
The problem here, in a nutshell, is that the E-Series RTUs do not support the dlog functionality. This functionality is native to the SCADAPack controllers, but the E-Series, which are born from entirely different firmware, do not have an implementation of dlog.
The E-Series controllers DO however have some data logging capabilities. If you open the E-Series Configurator Reference Manual, take a look here: E-Series Technical Reference Manuals > SCADAPack E-Series Trend Sampler Technical Reference
Cool, well I can do logging, but it's not software controlled. It's on a timer as trend data. Not as flexible as I would have liked, and it does not save to the USB.
So I asked what I could do with the USB host port. And I was told:
Unfortunately the USB Host port is not currently supported by the E-Series operating system. It may be supported in the future, but I'm not sure when.
Excellent. Not in use at all. Despite being ranted and raved over in the documentation and advertising.
Apart from this though, the SCADAPacks are great little devices for field units!
Monday, September 5, 2011
SQLite
I was experimenting using SQLite in C#, which is surprisingly easy. This blog post details all you need, with a Hans Moleman football to the groin reference and a Jackie Chan photoshop. What more do you need to learn SQLite?!
http://www.mikeduncan.com/sqlite-on-dotnet-in-3-mins/
Wednesday, August 31, 2011
Massive C# link dump
The other day I wrote a small C# GUI test app to analyse the speed and writing abilities of different data storage methods for sharing between different processes and computers. The idea being that two almost isolated devices (except for one open port for file sharing on a NAS) can share information between each other. This meant no messaging queues and no database servers.
My initial investigation was comparing writing to a shared XML file and a shared Access file (this is now being expanded to include SQLite). It needs to be a file that can easily be removed, backed up and still accessed by both devices at the same time. In the process of doing this, I ended up Googling for about 10 things I do constantly in C# but never remember. This blog post is now going to be the mighty link dump of them all for future reference, and why they were good.
First off, I had to generate mass amounts of data quickly to flood the shared file from both devices. I used the good old random number generator, which for some reason I can never commit to memory. This website has the function I use in almost every project that requires random (http://www.c-sharpcorner.com/UploadFile/mahesh/RandomNumber11232005010428AM/RandomNumber.aspx).
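The gist of the pattern I keep looking up is just this (class and method names here are mine, not the linked article's): share a single Random instance rather than constructing one per call.

```csharp
using System;

public static class RandomDemo
{
    // One shared Random instance: constructing a new Random inside a tight
    // loop seeds each one from the clock and can hand back repeated values.
    private static readonly Random Rng = new Random();

    // Returns a value in [minInclusive, maxExclusive), matching Random.Next.
    public static int NextInRange(int minInclusive, int maxExclusive)
    {
        return Rng.Next(minInclusive, maxExclusive);
    }
}
```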
I normally commit my application settings to a custom XML file or the Windows Registry. I thought with this application I would be trickier (so I can just copy it across or share it through the NAS) and use Visual Studio's built-in App.config settings (http://www.codeproject.com/KB/cs/SystemConfiguration.aspx). I had never used this before, but I was shocked at how easy and versatile it is!
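As a sketch of the idea (the key name and helper are invented for illustration): values live in the appSettings section of App.config and come back as strings through ConfigurationManager, which ships in System.Configuration.dll on the .NET Framework.

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration.dll

public static class SettingsDemo
{
    // Reads <appSettings><add key="..." value="..."/></appSettings> from the
    // .config file deployed next to the executable; a missing key comes back
    // as null, so we substitute a fallback.
    public static string GetSetting(string key, string fallback)
    {
        string value = ConfigurationManager.AppSettings[key];
        return value ?? fallback;
    }
}
```

This is exactly the property I was after: moving the settings to another machine is just copying the .config file alongside the executable.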
I created a test class for the randomly generated data. My first full test was to see how the system held up writing mass I/O to a shared XML file. Serializing the file to XML is easy, but most of my work puts it to a binary array for sending via sockets or other communications devices. Saving (as well as reading) to an actual XML file is a bit more work, but easy thanks to this website (http://codesamplez.com/programming/serialize-deserialize-c-sharp-objects).
Now that the application was reading and writing simultaneously, there were of course issues with file locks due to StreamReader and StreamWriter. Luckily, there's a workaround for StreamReader locks (http://bytes.com/topic/c-sharp/answers/510916-streamreader-avoiding-ioexception-due-external-lock).
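The workaround amounts to opening the file yourself with a FileStream that permits shared access, instead of letting the plain StreamReader constructor take an exclusive handle. A minimal sketch (the helper name is mine):

```csharp
using System;
using System.IO;

public static class SharedReadDemo
{
    public static string ReadWhileWriterActive(string path)
    {
        // FileShare.ReadWrite means we tolerate another process (or thread)
        // still holding the file open for writing while we read it, which is
        // what throws IOException with the default StreamReader(path).
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
```

The trade-off is that you can read a half-written file, so whatever you parse out of it needs to cope with truncated content.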
That worked and I got some good test data, even if the results were exactly as I predicted them to be (this will be another post when all my tests are complete).
The next test was doing the same thing, but storing it in Microsoft Access 2007 tables instead of XML. I did a lot of research into Access (it had been a while since I used it) and found lots of details and limitations of it (http://databases.aspfaq.com/database/what-are-the-limitations-of-ms-access.html).
Then I had to connect to it. Luckily there's a website which details pretty much every connection string you'll ever need for any database operations (http://www.connectionstrings.com/access-2007).
Databases have different time fields than .Net defaults. Whenever writing data to a DateTime field in a database I generally manually format the data in a custom ToString() call. Here's a website which details all you need to know about formatting .Net DateTime objects in whatever style you so fancy (http://www.csharp-examples.net/string-format-datetime/).
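In practice the format string I reach for is the unambiguous year-first one, which most SQL engines parse happily; a sketch (the helper name is mine):

```csharp
using System;
using System.Globalization;

public static class DateFormatDemo
{
    // Custom format specifiers like ':' are culture-sensitive, so pin the
    // invariant culture to guarantee literal '-' and ':' separators.
    public static string ToSqlLiteral(DateTime value)
    {
        return value.ToString("yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture);
    }
}
```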
Finally, bulk MS Access read/writes/deletes cause the file to bloat. It won't shrink back down unless you compact it. This is generally done in the Access software, sometimes on file close, but in a programmatic environment it never happens automatically. So you've got to do it all yourself in code (http://techieyogi.blogspot.com/2009/11/how-to-compact-access-2007-database.html).
Wednesday, July 27, 2011
C#: System.Data.SqlClient.SqlException : Must declare the table variable
So now I'm doing more and more in C# I'm trying to do it properly. As in not just taking my old habits and writing it in C# code, but using all sorts of C# proper style. Since a lot of my work is with databases right now, this means I'm experimenting a lot with LINQ and proper SQL queries.
I used to build SQL queries as pure strings, just passing in the variables directly as string concatenations. This is unsafe. Hugely so. Imagine if someone jimmied your input code? They could put whatever they want into that query. This isn't too big a problem for me, as my code runs as a background service hidden on a computer behind about 30,000 firewalls that's probably only ever going to be accessed by me and my boss.
But still, the proper way to do it is to set up a query in advance and pass in parameters, which is basically an abstraction layer of variables in the SQL query. Observe:
backupQuery = "SELECT tableData FROM @storageTableName WHERE dataSource = @dataSource AND dbName = @dbName AND tableName = @tableName";
backupCommand = new SqlCommand(backupQuery, backupConnection);
backupCommand.Parameters.Add(new SqlParameter("@storageTableName", _backupTableName));
backupCommand.Parameters.Add(new SqlParameter("@dataSource", _config.dataSource));
backupCommand.Parameters.Add(new SqlParameter("@dbName", _config.database));
backupCommand.Parameters.Add(new SqlParameter("@tableName", tableConfig.name));
At runtime, everything with an "@" before it gets replaced by the variables in the parameters. Very nice. It looks all proper. But when running it, I'd constantly get this:
System.Data.SqlClient.SqlException was caught
Message=Must declare the table variable "@storageTableName".
Source=.Net SqlClient Data Provider
ErrorCode=-2146232060
Class=16
Joy. Lots of Googling and head scratching led me to this forum.
In a nutshell, although parameters are an excellent way of using variables in SQL code, they can't be used for table names. So in the end, I had to rewrite my initial string query as:
backupQuery = "SELECT tableData FROM " + _backupTableName + " WHERE dataSource = @dataSource AND dbName = @dbName AND tableName = @tableName";
I'm sure there's a good reason for this that I don't know about, but it's kind of annoying.
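Since parameters can't stand in for identifiers, the usual defence when you do have to concatenate a table name is to validate it against a whitelist first. A sketch of that idea (class, method and table names are invented for the example; my real code just concatenates as above):

```csharp
using System;
using System.Collections.Generic;

public static class TableNameGuard
{
    // Only names in this set may ever reach the SQL string.
    private static readonly HashSet<string> AllowedTables =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "BackupTables", "ArchiveTables" };

    public static string BuildBackupQuery(string tableName)
    {
        if (!AllowedTables.Contains(tableName))
            throw new ArgumentException("Unknown table name: " + tableName);

        // The identifier is now safe to concatenate; the data values stay
        // as parameters exactly as before.
        return "SELECT tableData FROM " + tableName +
               " WHERE dataSource = @dataSource AND dbName = @dbName AND tableName = @tableName";
    }
}
```

This keeps the injection surface closed even though the table name itself can't be a SqlParameter.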