Wednesday, June 22, 2005

ASP.NET Apps within SharePoint

While designing a digital asset management system for one of my clients, I considered running the ASP.NET app within SharePoint by simply instructing SharePoint to exclude the app's path. This is a pretty good idea, if the app can live within the confines SharePoint imposes. Authentication & Session, for example, can't be controlled at this level.

If your app needs to impersonate, the capabilities are limited. Setting impersonation in the identity element causes the Windows identity to be IUSR_. Leaving impersonation off causes the Windows identity to be that of the App Pool (NETWORK SERVICE in my case). In either case, the Page and Thread principals are empty since the Directory Security setting in IIS allows anonymous access. Changing the Directory Security setting does what you'd expect -- the Page and Thread principals are set appropriately, impersonation works, etc. But now the entire top-level SharePoint site requires authorization (no anonymous access).

ASP.NET Impersonation and Principals

Every now and then I write a simple app to remind myself of what the principals are for the various impersonate config options. Maybe in the future I'll remember to look here.

Assumptions:
  • VRoot requires authentication (anonymous disabled)
  • VRoot's App Pool identity is NETWORK SERVICE
  • "IEUser" is the end user
  • "ImpersonatedUser" is the user config'd in the identity element

Scenario                            Page.User  Thread.CurrentPrincipal  WindowsIdentity
impersonate=false                   IEUser     IEUser                   NETWORK SERVICE
impersonate=true; userName not set  IEUser     IEUser                   IEUser
impersonate=true; userName set      IEUser     IEUser                   ImpersonatedUser

So, the identity of System.Security.Principal.WindowsIdentity is the only one that changes. Page.User should typically be used for IsInRole checks.
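The "simple app" boils down to dumping the three identities for the current request. A minimal sketch (drop into any ASP.NET page with tracing enabled; the class and trace category names are mine):

```csharp
using System;
using System.Security.Principal;
using System.Threading;
using System.Web.UI;

public class WhoAmIPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Page.User -- the principal ASP.NET attached to this request
        Trace.Write("Page.User", User == null ? "(null)" : User.Identity.Name);
        // Thread.CurrentPrincipal -- normally the same principal as Page.User
        IPrincipal tp = Thread.CurrentPrincipal;
        Trace.Write("Thread.CurrentPrincipal", tp == null ? "(null)" : tp.Identity.Name);
        // WindowsIdentity.GetCurrent() -- the OS token the request actually runs under;
        // per the table above, this is the only one the impersonate settings change
        Trace.Write("WindowsIdentity", WindowsIdentity.GetCurrent().Name);
    }
}
```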

Tuesday, March 08, 2005

SharePoint & WSS_Medium

As I mentioned in an earlier post, I've been trying to determine the overhead involved with updating data via SharePoint WebParts vs. ASP.NET WebForms. To get started, I created a simple WebPart to render the request's query params to an HTML table. The first problem I ran into is that WebParts do not have access to the params collection due to CAS. When I exec'd the page, I got an error along the lines of "LinkDemand for AspNetHostingPermission failed" -- so my WebPart didn't have this permission. I added it to the GAC, but elevating it to Full trust probably isn't a good idea. Thanks to several blog posts (Debugging Web Parts - a full explanation of the requirements in particular) I changed the system.web/trust config from level='WSS_Minimal' to 'WSS_Medium'. This brought my WebPart back to life, and I believe I'm now operating with the least required permissions.
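The change amounts to a one-line edit in the web.config at the root of the SharePoint virtual server (a sketch -- surrounding elements are abbreviated here):

```xml
<configuration>
  <system.web>
    <!-- Was: <trust level="WSS_Minimal" originUrl="" /> -->
    <trust level="WSS_Medium" originUrl="" />
  </system.web>
</configuration>
```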

Applications in SharePoint

I'm architecting a solution for a customer which would benefit from several features in SharePoint 2003 (Tasks, Alerts, doc repository, etc.) The solution will need several of these concepts, and I hope SharePoint's features will be a good match. However, I'm concerned that data entry pages will be complicated by the fact that SharePoint uses WebParts in zones on the page. This is similar to other frameworks I've used, yet so far I have not been able to determine the amount of additional overhead required to do data entry in SP.

WebParts are very similar to Server Controls in ASP.NET -- code in an assembly writes HTML to the output stream. No big deal from the output standpoint, but you don't get as much of the input benefits from ASP.NET controls (DataGrid is a good example).

Big Gaps

Wow! It's amazing how long the "dry spells" are between my blogging activities. Too busy!

Thursday, October 14, 2004

XSLT, XPath and GUID's

Ahh, GUIDs. We love 'em; we hate 'em. Actually I usually love them, but querying for GUIDs in XML is fraught with peril. Consider a simple case of searching for a Person node by the value of an attribute:

<xsl:template match="//Person[@PersonID='3']">
...
</xsl:template>

But what if the unique identifier for Person nodes is a GUID?

<xsl:template match="//Person[@PersonID=
'{4C22F4FA-0C4A-4FCF-85DF-F9B7A902244E}']">
...
</xsl:template>

  • What GUID format is used in the XML?
    • Bracket notation?
    • Hyphenated?
    • Mixtures?
  • With which casing is the alphabetic portion of the hex values stored in the XML?
    • Upper case?
    • Lower case?
    • Mixture?

As you can see from just these two items, the matrix of problems expands rapidly. In the particular case I am dealing with, I am able to control format (bracketed, hyphenated), but not case. I have had to assume that case will be either upper or lower, but that mixed case will not occur (a fairly reasonable assumption since the GUIDs are not manually edited; for code to render mixed case it has to do extra work). So, my XSLT file finds the appropriate node by this method:

<xsl:param name='FindThisGuid'/>
<xsl:variable name='UCaseGuid' select='translate($FindThisGuid, "abcdef", "ABCDEF")'/>
<xsl:variable name='LCaseGuid' select='translate($FindThisGuid, "ABCDEF", "abcdef")'/>
<xsl:template match='Person[@PersonID = $UCaseGuid] | Person[@PersonID = $LCaseGuid]'>
...
</xsl:template>

Now the Good News: XSLT 2.0 (via XPath 2.0 functions like lower-case() and upper-case()) should make case-insensitive comparisons like this far less painful.


Thursday, September 16, 2004

IE Gotcha! Do Not Use Self-Closing Script Tag

I recently had the (ahem) pleasure of helping a colleague debug a JavaScript problem. The issue was that he had multiple JavaScript blocks on a page like this:

<script type='text/javascript' src='uitools.js' />
<script type='text/javascript'>
function DoSomething() {
//...
}
</script>

Seems harmless enough, right? For the longest time we could not determine why DoSomething was not accessible to page elements. We tried all kinds of things until we stumbled upon a surprising resolution. We began to drill in on the issue when I moved the DoSomething function into the upper script block (which required breaking open the self-closing script tag). Suddenly, page elements could use the function successfully. After we moved the function back to its rightful home, everything still worked. Then I realized that the only difference was that the upper script block was no longer self-closing. When I converted it back to self-closing, sure enough the page stopped working again.

For some reason IE (and possibly other browsers) does not handle self-closing script tags in an XML compliant manner. Go figure!
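The fix, then, is simply to give every script element an explicit closing tag, even when it has no inline content:

```html
<!-- Always close script tags explicitly -->
<script type='text/javascript' src='uitools.js'></script>
<script type='text/javascript'>
function DoSomething() {
//...
}
</script>
```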

BP: 15 Minutes Isn't Enough

One of the pitfalls of the internet era is that we easily forget that communication takes time. Today's case in point has to do with product build announcements. On my current project, the dev team's "practice" is to send email announcing, "We're going to sync & build in X minutes. Don't check in anything until we send email announcing a successful build." (Formerly X was 5 minutes; recently it has trebled to a whopping 15. The fact that the dev team builds on a random schedule is poor practice in the first place.)

The person who does these builds seems to think that everyone receives the email instantaneously and that they will act on the information immediately. As is predictable, however, the builder frequently sends email along these lines, "Everybody stop checking in! I haven't been able to build." or "We will not have a build today because too many checkins occurred after the cut-off."

The problem is that people are not immediately in tune with email in some Borg-like fashion. We need consistent, dependable builds; but we're trying to deliver randomly.

Monday, September 13, 2004

How much will break?

I don't claim to be an expert in JavaScript, but I continue to find myself working with it. As I was working on a separate issue, I saw that the language attribute of <script> has been deprecated as of HTML 4.01. OK, I can switch to type='text/javascript' easily enough. The issue that got my attention, however, is that XHTML 1.0 does not support the language attribute at all. So, tons of existing JavaScript in HTML pages will be rejected in the XHTML world. That will create a significant barrier to adoption for XHTML.


Wednesday, September 08, 2004

Best Practices

My current project is very random and undisciplined. There are no good Product or Program Managers, and the dev team is almost out of control. The situation has driven me to think about several BPs that would help.


.NET Related

Ensure all XSDs (XML Schemas) are DataSet compatible.

This doesn't mean that XSDs should be generated by a DataSet (DS), nor is msdata:IsDataSet='true' attribution required. The intent of this practice is to make XML data readily available to .NET UI components (e.g., DataGrid). If the schema is compatible, then you can create a DataSet, initialize it with the XSD, and then load conformant XML data. Voila! Now just hook the DataSet up to the DataGrid (or other UI control) and bind.
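As a quick sketch of the round trip (the schema and data below are made-up examples -- a repeating complex element becomes a DataTable, its attributes become columns):

```csharp
using System;
using System.Data;
using System.IO;

class XsdDataSetDemo
{
    static void Main()
    {
        // A DataSet-compatible schema: 'Person' maps to a DataTable
        string xsd = @"<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>
          <xs:element name='PhoneList'>
            <xs:complexType>
              <xs:choice maxOccurs='unbounded'>
                <xs:element name='Person'>
                  <xs:complexType>
                    <xs:attribute name='Name' type='xs:string'/>
                    <xs:attribute name='Phone' type='xs:string'/>
                  </xs:complexType>
                </xs:element>
              </xs:choice>
            </xs:complexType>
          </xs:element>
        </xs:schema>";

        string xml = @"<PhoneList>
          <Person Name='Ann' Phone='555-0100'/>
          <Person Name='Bob' Phone='555-0101'/>
        </PhoneList>";

        DataSet ds = new DataSet();
        ds.ReadXmlSchema(new StringReader(xsd));  // initialize structure from the XSD
        ds.ReadXml(new StringReader(xml));        // load conformant XML data

        Console.WriteLine(ds.Tables["Person"].Rows.Count);  // 2
        // In a WebForm: grid.DataSource = ds.Tables["Person"]; grid.DataBind();
    }
}
```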

Note also that you can create a typed DataSet from the XSD and load XML directly into it. The typed DataSet gives you a bit more capability (e.g., dataSet.People[0].Name, where People is a typed DataTable and Name is a typed property on its rows).

Source Control

Even very small teams (2 people) should use a good source control mechanism. The cost of source control systems (on a per seat basis) pales in comparison to the risk of devs losing code to oversight (accidental deletion or overwrite), disk failure, time loss due to out-of-sync code, etc. It amazes me that so many teams try to "survive" without source control.

Check In Frequently

Once source control is in place, all developers should check in frequently -- at least daily. Code should be checked in after some level of validation (it compiles, generally works, etc.), but it also needs to be acceptable to check in code that is distinctly work-in-progress. A good rule of thumb I use is: the greater the upstream dependencies, the higher the check-in standard. Changes to a thread pool, for example, must always compile and work very well (unless it is early enough in the project that there are no dependencies yet).

Branch Code

Branching code involves penalties, but it is your friend. My current project is losing time and is risking code losses because they don't want to branch. A major demo is coming up, so most developers are coding for days on end without checking in. Creating a dead-end branch and taking targeted changes would allow devs to continue unhindered in the mainline. Another mechanism is to have a dedicated build machine for a specific purpose. In our current case, for example, just using the demo machine to build the code for itself frees the restrictions on the mainline. Of course you reduce opportunities to refine deployment procedures, but everything has its cons.


Monday, June 21, 2004

Crystal Reports -- Auto-gen'd Code Files

After designing a Crystal report file (.rpt), a corresponding code file (.cs in my case) will be generated. Typically, if your report is "Inventory.rpt" then the code file will be "Inventory.cs". Sometimes, however, you will get code-file pollution (due to VSS issues, copying between machines, etc.). Crystal's engine will auto-gen another file ("Inventory1.cs" in this case) and the original code file will simply be abandoned. Here's the best way I've found to remedy this problem and reset all the code files to the original names:
  • Delete all code files associated with report files (in this case, delete Inventory*.cs)
  • In Visual Studio, right-click each report file and select the "Run Custom Tool" context menu item
  • Rebuild your solution
Pretty simple.

Wednesday, June 16, 2004

Setting Crystal Report's Database Info at Run-Time

In previous posts I have discussed issues regarding Crystal Reports programming in ASP.NET applications. My most recent problem was, "How can I change a report's data source at run-time?" At design-time, you can use the CR designer's "Set Location" option, but this is impractical because:
  • Modifying even just a few reports with this mechanism is time consuming
  • Changing a report's data source, although infrequent, is not a one-time event
On my current project, I need to change the data source on reports for these reasons:
  • Each developer has different test databases to run the reports against
  • Moving the ASP.NET app through Development, Testing, and Production phases requires using different data stores for the reports
These are pretty standard circumstances for any dev team. Fortunately, it was very easy to leverage my EnterpriseDataStore class and set the appropriate information on the report.

SqlConnection conn = EnterpriseDataStore.GetConnection(<DataStore Name>);
CrystalDecisions.Shared.ConnectionInfo connInfo = new CrystalDecisions.Shared.ConnectionInfo();
connInfo.DatabaseName = conn.Database;
connInfo.ServerName = conn.DataSource;
// Leave UserID & Password alone if the report was designed for Trusted Security (NT Integrated)
connInfo.UserID = <user name>;
connInfo.Password = <user password>;
// rpt is the ReportDocument being pointed at the new data store
foreach (CrystalDecisions.CrystalReports.Engine.Table tbl in rpt.Database.Tables) {
    CrystalDecisions.Shared.TableLogOnInfo logOnInfo = tbl.LogOnInfo;
    logOnInfo.ConnectionInfo = connInfo;
    tbl.ApplyLogOnInfo(logOnInfo);
}

The combination is powerful. Now I can change the connection string info as part of the app config and have the reports point to the right data store dynamically. Each developer and app environment (Test, Production) has the config setting in machine.config or web.config, so the code can move seamlessly among all environments.
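In config terms this is just a per-environment appSettings entry. A sketch (the key name, server name, and what EnterpriseDataStore actually reads are my assumptions -- adjust to your own class):

```xml
<!-- web.config (or machine.config) on each dev box / Test / Production -->
<configuration>
  <appSettings>
    <!-- Hypothetical key; EnterpriseDataStore.GetConnection would look up something like this -->
    <add key="ReportingDataStore"
         value="Server=DEVSQL01;Database=Reporting;Integrated Security=SSPI;" />
  </appSettings>
</configuration>
```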

Saturday, June 12, 2004

Crystal Reports: Navigation in ASP.NET

I created a general purpose ASP.NET page for rendering all of my Crystal Reports. This aspx page has a simple toolbar for setting a date range for the report and selections for output type (HTML, Acrobat (PDF), Excel (XLS)). Some time after developing this page I realized that navigating through a multi-page report was not working -- every time I would navigate (using Crystal's toolbar), I would end up on page 1.
To resolve this problem, I only had to make slight modifications: detect the navigation event & skip a bit of code during navigation.

On the page's PreRender event:

  • Load the Report Document (.rpt)
  • If not navigating (use CRV's Navigate event and set a flag)...
    • Set the date range info on the Report Doc
  • Set CRV.ReportSource to the Report Doc
  • And, of course, call base.OnPreRender to give CRV its opportunity to do prerender processing

The primary issue of the viewer always rendering page 1 was caused by setting the date info again.
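The steps above can be sketched as follows (a sketch only -- the member names, SetDateRange, and the report file name are mine, and the Crystal API calls are from memory):

```csharp
using System;
using CrystalDecisions.CrystalReports.Engine;

public class ReportPage : System.Web.UI.Page
{
    private ReportDocument rpt;
    private bool isNavigating;   // set in the CrystalReportViewer's Navigate handler

    // Wired to CRV's Navigate event
    protected void CRV_Navigate(object source, CrystalDecisions.Web.NavigateEventArgs e)
    {
        isNavigating = true;
    }

    protected override void OnPreRender(EventArgs e)
    {
        rpt = new ReportDocument();
        rpt.Load(Server.MapPath("Inventory.rpt"));   // hypothetical report

        if (!isNavigating)
        {
            // Only apply the toolbar's date range on a fresh request;
            // re-applying it during navigation forced the viewer back to page 1
            SetDateRange(rpt);
        }

        CRV.ReportSource = rpt;   // CRV is the CrystalReportViewer on the page
        base.OnPreRender(e);      // let the viewer do its own prerender processing
    }

    private void SetDateRange(ReportDocument doc)
    {
        // push the toolbar's from/to dates into the report's parameters here
    }
}
```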


Tuesday, June 01, 2004

Stored Procedures for DAAB's SqlHelper.UpdateDataset

The Microsoft Data Access Application Block (DAAB) provides a great way to keep .NET DataSets synchronized with a SQL Server database without tons of code. However, there are a few gotchas along the way. One of the hurdles is how to persist a new DataSet row (DataRow) to the backend. In particular, if you are using an identity column as the primary key, you'll run into a few problems:
  • You can't add a row to the DataTable with a null value for the identity column (since you should have set the DataTable's PrimaryKey to the identity column)
  • Using a flag value (e.g., -1) for the new row's identity column works fine until you try to add more than one row. (An exception occurs announcing a primary key violation due to the 2nd new row's identity column being -1)
  • Trying to manually update the persisted new row's identity column in the DataSet, although a good first guess, is not a good way to go

The big trick for inserts is to structure the stored procedure (sproc) to update the data, and then return the entire row for the updated item. (Just returning the new identity value doesn't suffice). Consider a simple table: Phonelist (EntryId, Name, PhoneNumber). This table contains three columns:

  1. EntryId -- an auto-generated identity column for uniqueness.
  2. Name -- a person's name
  3. PhoneNumber -- the person's phone number

An appropriate sproc for adding new rows to this table would look like this:

create procedure PhonelistInsert
   (@name varchar(50), @phoneNumber varchar(15))
as
   insert into Phonelist (Name, PhoneNumber)
      values (@name, @phoneNumber)

   -- scope_identity() avoids picking up identity values generated by triggers
   select EntryId, Name, PhoneNumber
      from Phonelist
         where EntryId = scope_identity()
return

Notice that each of the columns originally used to populate the DataTable in the DataSet is selected and returned filled with the data just inserted (including the auto-generated identity column). Once you set up the correct SqlCommands for each of the three types of operations (insert, delete, update), just call SqlHelper.UpdateDataset. The underlying DataSet will auto-magically populate the identity column for the new row based on the row returned from the sproc. Very nice!
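A sketch of the insert-command wiring (the sproc, parameter, and class names here are placeholders -- match them to your own; UpdatedRowSource is what lets the returned row flow back into the DataRow):

```csharp
using System.Data;
using System.Data.SqlClient;

class PhonelistCommands
{
    // Builds the insert command to hand to SqlHelper.UpdateDataset (DAAB)
    public static SqlCommand BuildInsertCommand(SqlConnection conn)
    {
        SqlCommand cmd = new SqlCommand("PhonelistInsert", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        // Map sproc parameters to the DataTable's source columns
        cmd.Parameters.Add("@name", SqlDbType.VarChar, 50, "Name");
        cmd.Parameters.Add("@phoneNumber", SqlDbType.VarChar, 15, "PhoneNumber");

        // The row the sproc SELECTs back (including the new EntryId)
        // is merged into the inserted DataRow after execution
        cmd.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
        return cmd;
    }
}
```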

One last point: Don't forget to grant Execute permission to the appropriate user account (i.e., \ASPNET).