Thursday, October 14, 2004

XSLT, XPath and GUIDs

Ahh, GUIDs. We love 'em; we hate 'em. Actually I usually love them, but querying for GUIDs in XML is fraught with peril. Consider a simple case of searching for a Person node by the value of an attribute:

<xsl:template match="//Person[@PersonID='3']">
...
</xsl:template>

But what if the unique identifier for Person nodes is a GUID?

<xsl:template match="//Person[@PersonID=
'{4C22F4FA-0C4A-4FCF-85DF-F9B7A902244E}']">
...
</xsl:template>

  • What GUID format is used in the XML?
    • Bracket notation?
    • Hyphenated?
    • Mixtures?
  • In which case are the alphabetic portions of the hex values stored in the XML?
    • Upper case?
    • Lower case?
    • Mixture?

As you can see from just these two items, the matrix of problems expands rapidly. In the particular case I am dealing with, I am able to control format (bracketed, hyphenated), but not case. I have had to assume that case will be either upper or lower, but that mixed case will not occur (a fairly reasonable assumption since the GUIDs are not manually edited; for code to render mixed case it has to do extra work). So, my XSLT file finds the appropriate node by this method:

<xsl:param name='FindThisGuid'/>
<xsl:variable name='UCaseGuid' select='translate($FindThisGuid, "abcdef", "ABCDEF")'/>
<xsl:variable name='LCaseGuid' select='translate($FindThisGuid, "ABCDEF", "abcdef")'/>
<xsl:template match='Person[@PersonID = $UCaseGuid] | Person[@PersonID = $LCaseGuid]'>
...
</xsl:template>

Now the good news: XSLT 2.0 will hopefully resolve this issue by promoting the GUID to a first-class citizen (an actual type).
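For completeness, here is roughly how such a stylesheet might be invoked from .NET 1.1, passing the GUID in as the stylesheet parameter. This is a sketch only: the file names are hypothetical, and XslTransform/XsltArgumentList are the standard System.Xml.Xsl types of that era.

```csharp
using System;
using System.Xml;
using System.Xml.Xsl;
using System.Xml.XPath;

class FindPersonDemo {
    static void Main() {
        // Load the stylesheet (hypothetical file name)
        XslTransform xslt = new XslTransform();
        xslt.Load("FindPerson.xslt");

        // Pass the GUID to the stylesheet's FindThisGuid param;
        // the stylesheet handles the upper/lower case variants.
        XsltArgumentList args = new XsltArgumentList();
        args.AddParam("FindThisGuid", "",
            "{4C22F4FA-0C4A-4FCF-85DF-F9B7A902244E}");

        // Transform a hypothetical source document to the console
        XPathDocument doc = new XPathDocument("People.xml");
        XmlTextWriter writer = new XmlTextWriter(Console.Out);
        xslt.Transform(doc, args, writer);
    }
}
```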


Thursday, September 16, 2004

IE Gotcha! Do Not Use Self-Closing Script Tag

I recently had the (ahem) pleasure of helping a colleague debug a JavaScript problem. The issue was that he had multiple JavaScript blocks on a page like this:

<script type='text/JavaScript' src='uitools.js' />
<script type='text/JavaScript'>
function DoSomething() {
//...
}
</script>

Seems harmless enough, right? For the longest time we could not determine why DoSomething was not accessible to page elements. We tried all kinds of things until we stumbled upon a surprising resolution. We began to drill in on the issue when I moved the DoSomething function into the upper script block (which required breaking open the self-closing script tag). Suddenly, page elements could use the function successfully. After we moved the function back to its rightful home, everything still worked. Then I realized that the only difference was that the upper script block was no longer self-closing. When I converted it back to self-closing, sure enough the page stopped working again.

For some reason IE (and possibly other browsers) does not handle self-closing script tags in an XML compliant manner. Go figure!
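In other words, the safe pattern is to always give script elements an explicit closing tag, even when there is no inline code:

```html
<!-- Works reliably: explicit close tag, no self-closing form -->
<script type='text/javascript' src='uitools.js'></script>
<script type='text/javascript'>
function DoSomething() {
//...
}
</script>
```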

BP: 15 Minutes Isn't Enough

One of the pitfalls of the internet era is that we easily forget that communication takes time. Today's case in point has to do with product build announcements. On my current project, the dev team's "practice" is to send email announcing, "We're going to sync & build in X minutes. Don't check in anything until we send email announcing a successful build." (Formerly X was 5 minutes; recently it has trebled to a whopping 15. The fact that the dev team builds on a random schedule is poor practice in the first place.)

The person who does these builds seems to think that everyone receives the email instantaneously and that they will act on the information immediately. As is predictable, however, the builder frequently sends email along these lines, "Everybody stop checking in! I haven't been able to build." or "We will not have a build today because too many checkins occurred after the cut-off."

The problem is that people are not immediately in tune with email in some Borg-like fashion. We need consistent, dependable builds; but we're trying to deliver randomly.

Monday, September 13, 2004

How much will break?

I don't claim to be an expert in JavaScript, but I continue to find myself working with it. As I was working on a separate issue, I saw that the language attribute of <script> has been deprecated as of HTML 4.01. OK, I can switch to type='text/javascript' easily enough. The issue that got my attention, however, is that XHTML 1.0 does not support the language attribute at all. So, tons of existing JavaScript in HTML pages will be rejected in the XHTML world. That will create a significant barrier to adoption for XHTML.


Wednesday, September 08, 2004

Best Practices

My current project is very random and undisciplined. There are no good Product or Program Managers, and the dev team is almost out of control. The situation has driven me to think about several BPs that would help.


.NET Related

Ensure all XSDs (XML Schemas) are DataSet compatible.

This doesn't mean that XSDs should be generated by a DataSet (DS), nor is msdata:IsDataSet='true' attribution required. The intent of this practice is to make XML data readily available to .NET UI components (e.g., DataGrid). If the schema is compatible, then you can create a DataSet, initialize it with the XSD, and then load conformant XML data. Voilà! Now just hook the DataSet up to the DataGrid (or other UI control) and bind.
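A minimal sketch of that flow; the file names and the grid name are hypothetical:

```csharp
using System.Data;
using System.Web.UI.WebControls;

// Sketch: load a DataSet-compatible XSD, then conformant XML,
// and bind the result to a DataGrid.
DataSet ds = new DataSet();
ds.ReadXmlSchema("People.xsd");   // initialize structure from the XSD
ds.ReadXml("People.xml");         // load conformant XML data

// 'peopleGrid' is a hypothetical DataGrid on the page
peopleGrid.DataSource = ds.Tables[0];
peopleGrid.DataBind();
```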

Note also that you can create a typed DataSet from the XSD and load XML directly into it. The typed DataSet gives you a bit more capability (e.g., dataSet.People["Name"] where People is a DataTable and Name is a column in the DataTable).

Source Control

Even very small teams (2 people) should use a good source control mechanism. The cost of source control systems (on a per seat basis) pales in comparison to the risk of devs losing code to oversight (accidental deletion or overwrite), disk failure, time loss due to out-of-sync code, etc. It amazes me that so many teams try to "survive" without source control.

Check In Frequently

Once source control is in place, all developers should check in frequently -- at least daily. Code should be checked in after some levels of validation (it compiles, generally works, etc.), but it also needs to be acceptable to check in code that is distinctly work-in-progress. A good rule of thumb I use is: The greater the upstream dependencies, the higher the check in standard. Checking in changes to a thread pool, for example, must always compile and work very well (unless it is early enough in the project that there are no dependencies yet).

Branch Code

Branching code involves penalties, but it is your friend. My current project is losing time and risking code losses because the team doesn't want to branch. A major demo is coming up, so most developers are coding for days on end without checking in. Creating a dead-end branch and taking targeted changes would allow devs to continue unhindered in the mainline. Another mechanism is to have a dedicated build machine for a specific purpose. In our current case, for example, just using the demo machine to build the code for itself eases the restrictions on the mainline. Of course you reduce opportunities to refine deployment procedures, but everything has its cons.


Monday, June 21, 2004

Crystal Reports -- Auto-gen'd Code Files

After designing a Crystal report file (.rpt), a corresponding code file (.cs in my case) will be generated. Typically, if your report is "Inventory.rpt" then the code file will be "Inventory.cs". Sometimes, however, you will get code file pollution (due to VSS issues, copying between machines, etc.). Crystal's engine will auto-gen another file ("Inventory1.cs" in this case) and the original code file will simply be abandoned. Here's the best way I've found to remedy this problem and reset all the code files to the original names:
  • Delete all code files associated with report files (in this case, delete Inventory*.cs)
  • In Visual Studio, right-click each report file and select the "Run Custom Tool" context menu item
  • Rebuild your solution
Pretty simple.

Wednesday, June 16, 2004

Setting Crystal Report's Database Info at Run-Time

In previous posts I have discussed issues regarding Crystal Reports programming in ASP.NET applications. My most recent problem was, "How can I change a report's data source at run-time?" At design-time, you can use the CR designer's "Set Location" option, but this is impractical because:
  • Modifying even just a few reports with this mechanism is time consuming
  • Changing a report's data source, although infrequent, is not a one-time event
On my current project, I need to change the data source on reports for these reasons:
  • Each developer has different test databases to run the reports against
  • Moving the ASP.NET app through Development, Testing, and Production phases requires using different data stores for the reports
These are pretty standard circumstances for any dev team. Fortunately, it was very easy to leverage my EnterpriseDataStore class and set the appropriate information on the report.

SqlConnection conn = EnterpriseDataStore.GetConnection(<DataStore Name>);
CrystalDecisions.Shared.ConnectionInfo connInfo = new ConnectionInfo();
connInfo.DatabaseName = conn.Database;
connInfo.ServerName = conn.DataSource;
// Leave UserID & Password alone if report was designed for Trusted Security (NT Integrated)
connInfo.UserID = <user name>;
connInfo.Password = <user password>;
foreach(CrystalDecisions.CrystalReports.Engine.Table tbl in rpt.Database.Tables) {
    CrystalDecisions.Shared.TableLogOnInfo logOnInfo = tbl.LogOnInfo;
    logOnInfo.ConnectionInfo = connInfo;
    tbl.ApplyLogOnInfo(logOnInfo);
}

The combination is powerful. Now I can change the connection string info as part of the app config and have the reports point to the right data store dynamically. Each developer and app environment (Test, Production) has the config setting in machine.config or web.config, so the code can move seamlessly among all environments.
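For instance, the config entry might look something like this in web.config. The key name and connection string here are made up; EnterpriseDataStore is assumed to map the data store name to a connection:

```xml
<configuration>
  <appSettings>
    <!-- Hypothetical entry; each environment carries its own value -->
    <add key="ReportsDataStore"
         value="Server=devsql;Database=Reports;Integrated Security=SSPI" />
  </appSettings>
</configuration>
```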

Saturday, June 12, 2004

Crystal Reports: Navigation in ASP.NET

I created a general purpose ASP.NET page for rendering all of my Crystal Reports. This aspx page has a simple toolbar for setting a date range for the report and selections for output type (HTML, Acrobat (PDF), Excel (XLS)). Some time after developing this page I realized that navigating through a multi-page report was not working -- every time I would navigate (using Crystal's toolbar), I would end up on page 1.
To resolve this problem, I only had to make slight modifications: detect the navigation event & skip a bit of code during navigation.

On the page's PreRender event:

  • Load the Report Document (.rpt)
  • If not navigating (use CRV's Navigate event & set a flag)...
    • Set the date range info on the Report Doc
  • Set CRV.ReportSource to the Report Doc
  • And, of course, call base.OnPreRender to give CRV its opportunity to do prerender processing

The primary issue of the viewer always rendering page 1 was caused by setting the date info again.
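Sketched in code, the flag-based approach might look like this. Assumptions: crv is the CrystalReportViewer on the page, the .rpt file name is hypothetical, and the date-range logic is elided:

```csharp
using System;
using CrystalDecisions.CrystalReports.Engine;

// Sketch: detect Crystal toolbar navigation and skip
// re-applying the date range on those postbacks.
private bool navigating = false;

private void crv_Navigate(object source,
    CrystalDecisions.Web.NavigateEventArgs e) {
    navigating = true;
}

protected override void OnPreRender(EventArgs e) {
    ReportDocument rpt = new ReportDocument();
    rpt.Load(Server.MapPath("MyReport.rpt"));  // hypothetical report

    if (!navigating) {
        // set the date range info on the report doc here
    }

    crv.ReportSource = rpt;
    base.OnPreRender(e);  // let CRV do its prerender processing
}
```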


Tuesday, June 01, 2004

Stored Procedures for DAAB's SqlHelper.UpdateDataset

The Microsoft Data Access Application Block (DAAB) provides a great way to keep .NET DataSets synchronized with a SQL Server database without tons of code. However, there are a few gotchas along the way. One of the hurdles is how to persist a new DataSet row (DataRow) to the backend. In particular, if you are using an identity column as the primary key, you'll run into a few problems:
  • You can't add a row to the DataTable with a null value for the identity column (since you should have set the DataTable's PrimaryKey to the identity column)
  • Using a flag value (e.g., -1) for the new row's identity column works fine until you try to add more than one row. (An exception occurs announcing a primary key violation due to the 2nd new row's identity column being -1)
  • Trying to manually update the persisted new row's identity column in the DataSet, although a good first guess, is not a good way to go

The big trick for inserts is to structure the stored procedure (sproc) to update the data, and then return the entire row for the updated item. (Just returning the new identity value doesn't suffice). Consider a simple table: Phonelist (EntryId, Name, PhoneNumber). This table contains three columns:

  1. EntryId -- an auto-generated identity column for uniqueness.
  2. Name -- a person's name
  3. PhoneNumber -- the person's phone number

An appropriate sproc for adding new rows to this table would look like this:

create procedure PhonelistInsert
   (@name varchar(50), @phoneNumber varchar(15))
as
   insert into Phonelist (Name, PhoneNumber)
      values (@name, @phoneNumber)

   select EntryId, Name, PhoneNumber
      from Phonelist
      where EntryId = scope_identity()  -- safer than @@IDENTITY (ignores triggers)
return

Notice that each of the columns originally used to populate the DataTable in the DataSet is selected and returned filled with the data just inserted (including the auto-generated identity column). Once you set up the correct SqlCommands for each of the three types of operations (insert, delete, update), just call SqlHelper.UpdateDataset. The underlying DataSet will auto-magically populate the identity column for the new row based on the row returned from the sproc. Very nice!
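For the returned row to flow back into the DataRow, the insert command must be set up to accept it. A sketch of that setup, assuming the insert sproc is named PhonelistInsert and 'connection' is an open SqlConnection:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: wire up the insert command so the row the sproc
// SELECTs back refreshes the new DataRow (identity included).
SqlCommand insertCmd = new SqlCommand("PhonelistInsert", connection);
insertCmd.CommandType = CommandType.StoredProcedure;

// Map each sproc parameter to its DataTable source column
insertCmd.Parameters.Add("@name", SqlDbType.VarChar, 50, "Name");
insertCmd.Parameters.Add("@phoneNumber", SqlDbType.VarChar, 15, "PhoneNumber");

// FirstReturnedRecord maps the SELECTed row back onto the new DataRow
insertCmd.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
```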

One last point: Don't forget to grant Execute permission to the appropriate user account (i.e., \ASPNET).

Wednesday, May 26, 2004

DataGrid Edits With Paging & Sorting

If you've used the ASP.NET DataGrid, then you've probably used its sorting capabilities. It's very easy: just declare the sort column name in the item template and set DataGrid.AllowSorting = true. Similarly, it's easy to turn auto-paging on by setting DataGrid.AllowPaging=true.

Things get sticky, however, when you start allowing edits within the DataGrid. After adding an EditItemTemplate to each TemplateColumn that you want to be editable, and setting the DataGrid.EditItemIndex, everything appears to be just fine. One problem is that paging during edit isn't a valuable enough capability for the risks (if a row on page 1 is in edit, and the user moves to page 2, what is the state of the edited row? Does the user expect that the edits have been persisted? What should happen when the user now wants to edit an item on page 2?) Certainly there are cases in which paging during edits is valuable, but it generally is not worth the required coding, testing, etc., so it should be turned off during editing.

So, when entering edit mode, just set DataGrid.AllowPaging=false. Easy enough, right? Wrong! This introduces a very nasty bug. The bug occurs when the user attempts to edit a row on a page other than page 1. Your code merrily goes along setting the EditItemIndex to the value passed to your EditCommand (DataGridCommandEventArgs.Item.ItemIndex)
Your event handler would look something like this:


private void MyDataGrid_EditCommand(object source, DataGridCommandEventArgs e) {
    //...
    myDataGrid.EditItemIndex = e.Item.ItemIndex;
    myDataGrid.AllowSorting = false;
    myDataGrid.AllowPaging = false;
    BindGrid();
    //...
}


The unfortunate effect, however, is that the item in edit will be the item at e.Item.ItemIndex of page 1! Oops!

The solution I use is to make the pagers invisible by:

    //...
    myDataGrid.PagerStyle.Visible = false;
    //...

Obviously, you need to turn visibility back on during the code for canceling or committing the edits.
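The matching cancel handler might look like this sketch (the grid and BindGrid method names are hypothetical):

```csharp
using System.Web.UI.WebControls;

// Sketch: leave edit mode and restore the sorting/paging UI.
private void MyDataGrid_CancelCommand(object source,
    DataGridCommandEventArgs e) {
    myDataGrid.EditItemIndex = -1;         // no row under edit
    myDataGrid.AllowSorting = true;        // re-enable sorting
    myDataGrid.PagerStyle.Visible = true;  // bring the pagers back
    BindGrid();
}
```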

As an aside, it is also important to disable sorting during edits. If EditItemIndex is 4, then the fifth item will be under edit regardless of the underlying data. So, when the underlying data changes position (as sorting would do), then the wrong item is editable. The easiest way to solve this is to disable sorting (see the AllowSorting property usage above). Otherwise, you'll have to update EditItemIndex after the sort. If paging is also involved, then you'll have even more headaches to deal with (e.g., the item under edit is on page 1; the user sorts; now the correct item to edit is on page 3. Your code has to get the user to the right grid page with the right item under edit. Not impossible, but....)


Tuesday, May 25, 2004

Better DataGrid insert method

After spending way too much time yesterday figuring out how to get a DataGrid footer's contents, this morning I found something that is much easier in the right situation.

For some reason Adding a New Record to a DataGrid from 4GuysFromRolla made its way into my Google search. Their solution is to use an Add button in the footer row, hook into the DataGrid's ItemCommand event, and redirect to an "insert into...." method when the command is the Add button.

In my current case, the Add button needs to be outside the DataGrid, so I can't try this solution now. However, I will be very interested to use it next time because it is a much, much easier mechanism for collecting the user's input in the footer controls. The ItemCommand event includes a DataGridCommandEventArgs parameter which provides easy access to the controls. For example:

   TextBox nameCtrl = (TextBox)e.Item.FindControl("nameTextBox");
   string name = nameCtrl.Text;

Monday, May 24, 2004

Inserting data via a DataGrid footer row

(a.k.a.: How to retrieve footer row contents on postback)
(a.k.a.: DataGrid's Nasty Underbelly)

Adding a footer row to a DataGrid is a cake-walk. Getting to the footer's contents on postback is a whole different ball of wax!

Why should you want to get the contents? Because it provides a very elegant mechanism for inserting a new row into the DataGrid (and the DataSet behind it, etc.) Ok, so how can we make this work?

When debugging, you will find that DataGrid.Controls[0] is a DataGridTable. So you attempt to cast this in your code, but then you find that DataGridTable is not instantiable. What to do, what to do? If you can just get to the DataGridItem for the footer, then you can use FindControl to get each control in your footer. To cut to the chase, here is what you need to do:

  • Get a WebControls.Table object from the DataGrid
  • Use the Table to determine the index position of the footer row
  • Acquire the footer row as a DataGridItem
  • Use DataGridItem.FindControl to locate each control on the row, casting each appropriately

By way of an example:
   Table tbl = (Table)dataGrid.Controls[0];
   int idxFooter = tbl.Rows.Count - 1;
   DataGridItem gridItem = tbl.Rows[idxFooter] as DataGridItem;
   //Assuming a TextBox named UserName was defined in the footer template...
   TextBox name = gridItem.FindControl("UserName") as TextBox;

Now you can use the TextBox as usual! To insert the data to the back-end datastore (XML, RDBMS, etc.), I like to create a new row in the DataSet table and use a DataAdapter to update (using the InsertCommand in this case).
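The insert itself can be sketched like this; the table name, column name, and adapter variable are hypothetical, and 'name' is the footer TextBox located above:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: create the new row from the footer controls and persist it.
DataRow row = ds.Tables["People"].NewRow();
row["UserName"] = name.Text;          // value from the footer TextBox
ds.Tables["People"].Rows.Add(row);

// 'adapter' is assumed to have its InsertCommand configured already
adapter.Update(ds, "People");
```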

Wednesday, May 19, 2004

Crystal Reports -- Render vs. PreRender

Crystal (more specifically, Web.CrystalViewer) does some special stuff during its PreRender handler. When adding Parameters to the viewer, you need to do it before the control's PreRender. Otherwise, the rendered content will just be "Programming Error." I figured this out the hard way by creating a handler for Page.Render and config'ing the Web.CrystalViewer within. After receiving the error message mentioned (and scratching my head for a while), I moved all the code to my own PreRender handler. By setting everything in PreRender, calling base.OnPreRender (to let the other controls handle PreRender), and not overriding Render, everything began to work.

The Love-Hate relationship continues...

Undoc'd exception on DateTime.Parse

The .NET Framework Class Library documentation for DateTime's Parse method identifies two exceptions that may occur: ArgumentNullException and FormatException. When using this method, however, I found that it also throws ArgumentOutOfRangeException for dates that have valid formats but are out of range. For example:

   DateTime.Parse( "1/1/20000" );    // oops! year = 20,000

will throw ArgumentOutOfRangeException.
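So defensive code around Parse needs to catch more than the documented exceptions; a minimal sketch:

```csharp
using System;

// Sketch: guard against the undocumented out-of-range case too.
string input = "1/1/20000";  // valid format, but year 20,000 is out of range
try {
    DateTime dt = DateTime.Parse(input);
}
catch (ArgumentOutOfRangeException) {
    // undocumented: valid format but unrepresentable date
}
catch (FormatException) {
    // documented: malformed date string
}
```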

Wednesday, May 12, 2004

Using Parameters With Crystal Reports

To use parameters with Crystal Reports, there seem to be two fundamental issues to deal with:
  • Create parameters on the report (.rpt)

  • Set parameters on the report viewer object before assigning the report.

For example, I have a report that needs to filter data based on a date range. So in the report I had to:
  1. Create the parameters, BeginDate & EndDate

  2. Using the Select Expert, apply the parameters to the selection criteria (this is actually very easy; once the parameters have been created, you can select them from the drop-downs in Select Expert).

  3. Drag and place the parameters onto the report. This step is not required, but I find that most reports need to represent how the underlying data was selected.

Now that the report has parameters, it must have some way of acquiring values for the parameters at run-time. This is the point at which I had the most trouble because I thought that the code should set the parameters on the report object at run-time. That doesn't work. It does work, however, if you add parameter values to the viewer object prior to handing the report to the viewer. The general method to follow is:
  1. Create one ParameterField object per parameter on the report.

  2. IMPORTANT: Set each ParameterField.ParameterFieldName property to the same name as the associated parameter in the report. If the names are not an exact match, then the report will not acquire the parameter value correctly.

  3. Add each ParameterField to a ParameterFields object.

  4. Assign the ParameterFields object to the viewer's ParameterInfo property.

  5. Finally, assign the report to the viewer. (Actually the order of report and parameters assignment is not significant, but it is a little more logical)

Here's a simple C# code snippet from a web form's Page_Load event handler:


CrystalDecisions.Shared.ParameterField fld = new ParameterField();
CrystalDecisions.Shared.ParameterFields flds = new ParameterFields();

// Set begin date param
CrystalDecisions.Shared.ParameterDiscreteValue prmBeginDate =
new CrystalDecisions.Shared.ParameterDiscreteValue();
prmBeginDate.Value = new DateTime( 2004, 1, 2 );
// NOTE: the parameter name must match the report's parameter
// name exactly.
fld.ParameterFieldName = "BeginDate";
// Add the discrete param to the param field
fld.CurrentValues.Add( prmBeginDate );
// Add the param fld to the param fld collection
flds.Add( fld );

// Add the params to the viewer
// snippet info: crv is a class member declared as CrystalDecisions.Web.CrystalReportViewer
crv.ParameterFieldInfo = flds;


// Now load the report
OverallEquipmentActivity2 rpt = new OverallEquipmentActivity2();
crv.ReportSource = rpt;


Just a Simple Development Log

The purpose of this blog is simply to help me remember all those little arcane settings, methods, etc. that you have to deal with during software development. Frankly, the level of frustration generated by wrestling with Crystal Reports on my current project has driven me to create this blog.