Reporting Services WebParts Article

Posted in Reporting Services | SharePoint at Tuesday, April 27, 2004 3:25 AM Pacific Daylight Time

I've just posted what I hope will be the first of a series of articles on how to create SharePoint webparts for Reporting Services. This first article just gives some background information on how this can be done and creates a simple webpart that displays properties about a report.

Reporting Services WebParts - Part I

I would be interested in any feedback you have about the article since it is rather long. :)

Reporting Services WebParts - Part I

Posted in Reporting Services | SharePoint at Monday, April 26, 2004 4:26 AM Pacific Daylight Time

This article is the first in a series of articles that I hope to publish about creating some Reporting Services WebParts. These webparts will be used to display Reporting Services reports as SharePoint WebParts. The articles assume that you have a working knowledge of SharePoint, WebParts, Reporting Services, and JavaScript.

In this article on Reporting Services webparts I want to set up some of the background information that will be used in creating the different parts. One of the first challenges to using Reporting Services (RS) in a WebPart is that RS uses a web service to manipulate the reports. There are some authentication issues you will face if you try to call these web services directly from your webpart's code. The specific issue you will hit is called the double-hop problem, and it is caused by the way Windows Authentication works: Windows Authentication does not actually send your credentials over the wire but instead uses a challenge/response handshake to authenticate the user, all of which is well outside the scope of this article (and my expertise). So the route these articles will take is to make the web service calls directly from the client using some client side script.

If you're not already familiar with web services, SOAP, and JavaScript, you may need to read up a little before this all makes sense. But even if you don't understand it all, you can probably still work your way through the article and get it to work.

So our first task is to figure out how to make web service calls via client side script. Fortunately for us, a really big company has already figured out how to do this. Microsoft uses client side script in a lot of their web parts in order to call the SharePoint web services. This serves as a good starting point for us, and their code is pretty easy to modify to fit our needs. To take a look at their code, browse to the "[c:\]Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\LAYOUTS\1033" directory on your SharePoint computer. In that directory you will need to look at two files: IE55UP.js and IE50UP.js. There are a few differences between these, and which one you use will depend on what browser your org is using. I will be using IE55UP.js for these articles, but feel free to post comments about any differences you find.

In the IE55UP.js file you will find a procedure called SPSoapRequestBuilder. This procedure is the one used to call the SharePoint (SP) web services. It is important to note the namespace used in the soap call: http://microsoft.com/sharepoint/webpartpages. This is one of the main changes we will make in order to use this same procedure to call the RS web service. Below is the procedure that the RS webparts will be using:

function RSSoapRequestBuilder(functionName)
{
  var object = new Object();
  function AddParameter(parameterName, parameterValue)
  {
    var index = this.parameterNameList.length;
    this.parameterNameList[index] = parameterName;
    this.parameterValueList[index] = parameterValue;
  }
  function SendSOAPMessage(xmlhttp)
  {
    var funcName = this.functionName;
    var paramNames = this.parameterNameList;
    var paramValues = this.parameterValueList;
    xmlhttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    xmlhttp.setRequestHeader("SOAPAction",
      "http://schemas.microsoft.com/sqlserver/2003/12/reporting/reportingservices/"
      + funcName);
    var soapData = '<?xml version="1.0" encoding="utf-8"?>' +
      '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' +
      'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' +
      'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
      '<soap:Body>' +
      '<' + funcName +
      ' xmlns="http://schemas.microsoft.com/sqlserver/2003/12/reporting/reportingservices">';
    for (var i = 0; i < paramNames.length; i++)
    {
      // Strings get XML-encoded; anything else (such as a pre-built XML fragment)
      // is passed through as-is
      var soapParam = (typeof(paramValues[i]) == "string") ? XmlEncode(paramValues[i]) : paramValues[i];
      soapData += '<' + paramNames[i] + '>' + soapParam + '</' + paramNames[i] + '>';
    }
    soapData += '</' + funcName + '>' +
      '</soap:Body>' +
      '</soap:Envelope>';
    xmlhttp.send(soapData);
    return xmlhttp;
  }
  object.functionName = functionName;
  object.parameterNameList = new Array();
  object.parameterValueList = new Array();
  object.AddParameter = AddParameter;
  object.SendSOAPMessage = SendSOAPMessage;
  return object;
}

Notice that we are using a different namespace: http://schemas.microsoft.com/sqlserver/2003/12/reporting/reportingservices/. This is the namespace for the RS web service. Using this function we can now create the JavaScript object that will be used to call the RS web service. You will also need the XmlEncode function, which is just the String2XML function from the SP script, renamed so that we don't run into conflicts; the check for an XML fragment was added so that we can pass in complex parameter types.

function XmlEncode(Value)
{
  // Values that are already XML fragments are passed through untouched
  if (Value.substring(0,1) == '<') return Value;
  var XmlString = "";
  var re = /&/g;
  XmlString = Value.replace(re,"&amp;");
  re = /</g;
  XmlString = XmlString.replace(re,"&lt;");
  re = />/g;
  XmlString = XmlString.replace(re,"&gt;");
  re = /"/g;
  XmlString = XmlString.replace(re,"&quot;");
  re = /'/g;
  XmlString = XmlString.replace(re,"&apos;");
  return XmlString;
}

This function just encodes any characters that would cause our Xml Soap request to be invalid.
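As a quick sanity check, here is a standalone copy of XmlEncode for illustration (the regexes collapsed into chained replaces, same behavior as the version above), along with what it produces:

```javascript
// Standalone copy of XmlEncode for illustration; behavior matches the
// version above: values already starting with '<' pass through untouched.
function XmlEncode(value) {
  if (value.substring(0, 1) == '<') return value;
  return value
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

// A plain string gets escaped:
//   XmlEncode('Q1 & Q2')  ->  'Q1 &amp; Q2'
// A pre-built XML fragment is passed through unchanged:
//   XmlEncode('<Property><Name>Name</Name></Property>')
//     ->  '<Property><Name>Name</Name></Property>'
```

Note that the ampersand must be replaced first; otherwise the entities produced by the later replacements would themselves get re-escaped.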

To get started on our Visual Studio project, you will need to first download and install the WebPart Project Template from the SharePoint MSDN website. Once you have that downloaded and installed you can run through the following steps:

1) Start Visual Studio and create a new WebPart Library (name it something useful). You can create either a C# or a VB project. My code will be in C#, but I can provide the VB code if you get stuck in the translation.

2) Delete the WebPart1.cs and WebPart1.dwp files from the project.

3) Create a new folder called Scripts and add a new Javascript file called rs.js. Copy/Paste the two functions (XmlEncode and RSSoapRequestBuilder) into this new file and save it.

Now that we have the script to call, we need to make it easy for our webparts to call it. So we will add a utility class called Globals (style borrowed from .Text) that will encapsulate this for us. Here is what my Globals class looks like. You can Copy/Paste this into a new class called Globals in a new folder called Utility.

using System;

namespace Bml.RsWebParts.Utility
{
  /// <summary>
  /// Utility class containing common functions
  /// </summary>
  public sealed class Globals
  {
    /// <summary>
    /// Static methods only, so use a private
    /// constructor
    /// </summary>
    private Globals()
    {}

    /// <summary>
    /// Property ResourceMap (String)
    /// </summary>
    public static string ResourceMap
    {
      get
      {
        return "/wpresources/Bml.RsWebParts/";
      }
    }
    public static string GetResourceScriptTags(string scriptFileName)
    {
      return string.Format("<script language='javascript' src='{0}{1}'></script>",
        ResourceMap,
        scriptFileName);
    }
  }
}

Now you will have to modify what the ResourceMap property returns based on two things: (1) what the name of your assembly is and (2) how you deploy your assembly. When you deploy your assembly, the extra files you include with it (such as script files) will get deployed into a folder that has the same name as your assembly. If you deploy your webpart using the STSADM tool, then you can just use the "/wpresources/[Your Assembly Name]" format. If you get fancy and create an installer tool, then you might need to tweak this to fit wherever you install your assembly to. For now just change it to "/wpresources/[Your Assembly Name]".
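To make this concrete, with the assembly name used in this article, a call like Globals.GetResourceScriptTags("rs.js") would render the following tag (the old-style language attribute matches what SharePoint's own scripts used at the time):

```html
<script language='javascript' src='/wpresources/Bml.RsWebParts/rs.js'></script>
```

If the src path 404s in the browser, the ResourceMap value is the first thing to double-check against your actual deployment folder.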

The next step is to create your first webpart which is going to be a very simple example so that this article isn't too long. So in your VS project select Add New Item and then select Web Part. Name it RsReportInfo. Next delete everything inside the class body so that you have an empty class with no variables, methods, or properties.

The first thing we are going to add is three properties (two of which we will eliminate later) that we will use to setup this webpart. Below is the C# code that you need to add to your class.


#region Properties

private string serverUrl = string.Empty;
private string reportPath = string.Empty;
private string propertyList = "Name|Description";

[Browsable(true),
 Category("Reporting Services"),
 DefaultValue(""),
 WebPartStorage(Storage.Shared),
 FriendlyName("Server URL"),
 Description("Server url such as http://localhost/reportserver")]
public string ServerUrl
{
  get {return serverUrl;}
  set {serverUrl = value;}
}

[Browsable(true),
 Category("Reporting Services"),
 DefaultValue(""),
 WebPartStorage(Storage.Shared),
 FriendlyName("Report Path"),
 Description("Report path such as /SampleReports/Product Line Sales")]
public string ReportPath
{
  get {return reportPath;}
  set {reportPath = value;}
}

[Browsable(true),
 Category("Reporting Services"),
 DefaultValue(""),
 WebPartStorage(Storage.Shared),
 FriendlyName("Property List"),
 Description("List of properties to display such as Name|Description")]
public string PropertyList
{
  get {return propertyList;}
  set {propertyList = value;}
}

#endregion

These three properties will allow us to configure the webpart to display information about a particular report. The property list is a pipe-separated list of properties that should be displayed. The properties that you can display for a report are found in the documentation here (if you have RS installed). Next we are going to add four overridden methods and one private method. Below are these methods along with a short description of what they do.

public override ToolPart[] GetToolParts()
{
  ToolPart[] toolparts = new ToolPart[2];
  WebPartToolPart wptp = new WebPartToolPart();
  CustomPropertyToolPart custom = new CustomPropertyToolPart();
  toolparts[1] = wptp;
  toolparts[0] = custom;
  custom.Expand(0);
  return toolparts;
}

This method will put our custom properties at the top of the property list and expand them. This section is really optional, but it is worth adding since it makes configuring the webpart much easier.

protected override void OnLoad(EventArgs e)
{
  Page.RegisterClientScriptBlock("rs", Globals.GetResourceScriptTags("rs.js"));
  base.OnLoad (e);
}

We register our client script block when the Load event fires. We use the name "rs" since the rs.js file is shared by all the RS webparts. If another webpart has already loaded the rs.js script, it won't be loaded again.

protected override void OnPreRender(EventArgs e)
{
  if (serverUrl.Length > 0 && reportPath.Length > 0)
  {
    Page.RegisterStartupScript("rs_info" + base.Qualifier, GetStartupScript());
  }
  base.OnPreRender (e);
}

We register a startup script during the PreRender event. This will be important later when the ServerUrl and ReportPath properties can be set by other webparts. For now we are just creating our startup script using the properties set in the property window.

protected override void RenderWebPart(HtmlTextWriter output)
{
  if (serverUrl.Length > 0 && reportPath.Length > 0)
  {
    output.AddAttribute(HtmlTextWriterAttribute.Class, "ms-WPBody");
    output.AddAttribute(HtmlTextWriterAttribute.Id, "divInfo");
    output.AddAttribute(HtmlTextWriterAttribute.Style, "overflow:auto;");
    output.RenderBeginTag(HtmlTextWriterTag.Div);
    output.RenderEndTag(); // div
  }
}

Here is where our webpart is actually rendered. Our webpart is really just a div tag that will get populated by the client side script which we will create next.

private string GetStartupScript()
{
  StringBuilder sb = new StringBuilder();
  string[] properties = propertyList.Split('|');
  sb.Append("<script language='javascript'>");
  sb.Append(Environment.NewLine);
  sb.AppendFormat("GetReportInfo('{0}', '{1}', '", serverUrl, reportPath);

  foreach(string prop in properties)
  {
    sb.AppendFormat("<Property><Name>{0}</Name></Property>", prop);
  }
  sb.Append("');"); // ends the GetReportInfo method call
  sb.Append("</script>");
  sb.Append(Environment.NewLine);
  return sb.ToString();
}

This startup script emits a call to the GetReportInfo method, which needs to be added to our rs.js file (and is shown below). The properties parameter of the GetProperties web method takes an array of Property objects. If you're wondering how I figured this out, you can take a look at the RS WSDL file (if you don't know what WSDL is then you probably don't want to look at the file). In the WSDL file you can see the type of XML that needs to be passed for this parameter by looking at the schema.
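To see how the pieces fit together: with the sample values from the property descriptions above and the default property list, the body of the envelope that RSSoapRequestBuilder assembles for this GetProperties call would look roughly like this (whitespace added for readability; in reality it is sent as one unbroken string):

```xml
<soap:Body>
  <GetProperties xmlns="http://schemas.microsoft.com/sqlserver/2003/12/reporting/reportingservices">
    <Item>/SampleReports/Product Line Sales</Item>
    <Properties>
      <Property><Name>Name</Name></Property>
      <Property><Name>Description</Name></Property>
    </Properties>
  </GetProperties>
</soap:Body>
```

The Item element comes through XmlEncode as a plain string, while the Properties value starts with '<' and is passed through untouched, which is exactly why the XML-fragment check was added.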

Finally here is the GetReportInfo method that needs to be added to the rs.js file.

function GetReportInfo(serverUrl, reportPath, properties)
{
  var returnList = '';
  try
  {
    var xmlhttp = null;
    try
    {
      xmlhttp = new ActiveXObject("Msxml2.XMLHTTP.4.0");
    }
    catch(e)
    {
      xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
    }
    xmlhttp.open("POST", serverUrl + '/reportservice.asmx', false);
    var soapBuilder = RSSoapRequestBuilder("GetProperties");
    soapBuilder.AddParameter("Item", reportPath);
    soapBuilder.AddParameter("Properties", properties);
    soapBuilder.SendSOAPMessage(xmlhttp);
    returnList = xmlhttp.responseText;
  }
  catch(e)
  {
    alert(e.description);
    return null;
  }

  try
  {
    var xmldom = new ActiveXObject("Msxml2.DOMDocument");
    xmldom.async = false;
    xmldom.loadXML(returnList);
    // did we get a soap error?
    var isSoapError = xmldom.selectSingleNode(".//faultstring");
    if (isSoapError)
    {
      alert(isSoapError.text);
      return false;
    }

    var nodeList = xmldom.selectNodes(".//Property");
    var output = '';

    for (var i = 0; i < nodeList.length; i++)
    {
      output += "<b>" +
        nodeList[i].selectSingleNode("Name").text +
        "</b>: " +
        nodeList[i].selectSingleNode("Value").text +
        "<br>";
    }

    if (output.length == 0) output = "No information available.";

    divInfo.innerHTML = output;
  }
  catch(e)
  {
    alert(e.description);
  }
  return false;
}

 

 

The only tasks that remain are to create the DWP file, modify the manifest, and to deploy the webpart. The DWP file is added by right clicking the project and selecting Add New Item. Select Web Part DWP for the item type and give it the same name as your class file. Below is the DWP for my web part (your file should look similar).

<?xml version="1.0" encoding="utf-8"?>
<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2" >
  <Title>Rs Report Info</Title>
  <Description>Displays information about a report.</Description>
  <Assembly>Bml.RsWebParts</Assembly>
  <TypeName>Bml.RsWebParts.RsReportInfo</TypeName>
</WebPart>

 

The next step is to modify the manifest file to include all the information about your new webpart and its resources. You will need to add the rs.js file as a resource so that it will be deployed with the webpart. Below is my manifest file (and again your file should look similar).

<?xml version="1.0"?>
<WebPartManifest xmlns="http://schemas.microsoft.com/WebPart/v2/Manifest">
  <Assemblies>
    <Assembly FileName="Bml.RsWebParts.dll">
      <ClassResources>
        <ClassResource FileName="rs.js"/>
      </ClassResources>
      <SafeControls>
        <SafeControl
          Namespace="Bml.RsWebParts"
          TypeName="*"
        />
      </SafeControls>
    </Assembly>
  </Assemblies>
  <DwpFiles>
    <DwpFile FileName="RsReportInfo.dwp"/>
  </DwpFiles>
</WebPartManifest>

Finally you will need to add a setup project to your existing solution. Right click the solution, select Add New Project, and choose CAB Project under Setup and Deployment Projects. Name it something useful since this is the name you will see when using the STSADM tool. Next right click your setup project and select Add Project Outputs. Select Primary Output and Content Files (select both by holding down Ctrl). Back in your webpart project, right click rs.js and select Properties. In the Properties window change the Build Action to Content. Do the same thing for the manifest.xml and the DWP file. Build the project and, as long as you don't have any build errors, you're ready to deploy the project to a test server.

Copy the cab file from either the debug or release folder in the setup project's folder (using Windows Explorer) to your server. The next step involves the STSADM tool, which is a very simple tool once you understand it. To make using this tool easier I add "c:\Program Files\Common Files\Microsoft Shared\web server extensions\60\BIN" to the PATH environment variable on the server (right click My Computer, select Properties, Advanced tab, Environment Variables, find the PATH variable and append the path above, separated from the last entry by a semicolon). Once you've added that path you can just open a command window and type "stsadm -o addwppack -filename c:\path_to_your_cab_file\your_cab_file.cab" and your webpart should now be installed.

 

 

Here is a screenshot of the webpart in action. Clearly there is a lot more we could do with this, but for now this is a good start. Hopefully this has given you a good start on using RS and SP webparts together.

Article version 1.0
Updated: 4/27/2004

RSS Bandit to add Subscription Syncing

Posted in Sql and Xml | General at Wednesday, April 21, 2004 4:18 AM Pacific Daylight Time

Dare comments:

RSS Bandit [currently] gives you the option to download and upload your feed list from a file share, an FTP server or a dasBlog weblog. However this doesn't actually do much synchronization during the import phase, basically it just adds the feeds that don't currently exist in your aggregator. It doesn't synchronize the read/unread messages, remove deleted feeds or remember which items you've flagged for follow up. I am in the process of fixing this for the next release.

Wow! When I started using RSS readers I began with RSS Bandit. I was even part of the workspace for a bit but never actually did any work other than making some suggestions (which usually Dare had already thought about). But at some point I switched to SharpReader.

Now the RSS Bandit team has been busy adding new features, and while some of the features like searching were tempting, they weren't enough to make me switch back. But subscription syncing, now that is a killer feature that I would drop SharpReader for in a quick second. I basically quit reading blogs away from work since it was just too much trouble to filter through what I had already read. This sounds like a great solution to that problem.

Eric Gunnerson on Loops

Posted in General at Monday, April 19, 2004 2:23 AM Pacific Daylight Time

Eric Gunnerson on Loops:

So, I would choose the foreach version unless I needed the index.

In fact, I would advocate this position even if for loops are faster, to avoid the sin of premature optimization.

What was funny to me when I read this was that I've always preferred the foreach version because of readability, but I always assumed it was the slowest. I actually assumed option #3 was the fastest. Glad to hear that my favorite method is the preferred one.

Update: Joshua posts how option #3 is actually the fastest and concludes that:

There are two “rules” I would like to propose:

  • Don't be too clever.  Writing loop 3 just because you think it will be faster is too clever.  Avoiding loop 3 just because “the JIT can't optimize it” is also trying to fool the compiler.  Just write the code that does what you want.
  • Measure perf specific to your scenario.  Measure it yourself.

Update(2): Seems like this topic is a hot one. Kevin Ransom asks To foreach or not to foreach and concludes:

[T]here is no performance benefit to be gained by replacing foreach(...) with for(int i; ...).

 

 

WebParts with Multiple Connections

Posted in Reporting Services | SharePoint at Friday, April 16, 2004 3:12 AM Pacific Daylight Time

As I mentioned here, one of the projects I've done at Countrywide was building a group of WebParts for Reporting Services (I'm still looking into releasing these at some point). Recently I added an Export Report webpart that allows the user to export the report shown in the Report Viewer webpart. It worked fine when I developed it, but there seemed to be a problem when it was deployed: the URL it was getting was empty. I figured out a solution and thought it might be of use to other webpart developers.

Here is a screen shot of some of the webparts in action.

The RS Folder View uses some client side script to call the Reporting Services webservice from the client's machine. It then fills itself with all the reports in the RS folder (the folder and server are specified as properties). When the user clicks on a specific report, it sends the server URL and the report path to the RS Report Viewer as a row. It also sends the same row to the RS Report Parameters webpart, which also hits the RS webservice to get the list of parameters and values for the selected report. When a user selects parameters, they also get sent as a row to the RS Report Viewer. The RS Report Viewer takes both rows and generates a URL which it loads in an iframe.

The RS Report Export webpart is the last piece of the puzzle. It takes the URL from the RS Report Viewer and uses that to allow the user to export the report to the selected format. The problem I was having was that this value was getting sent to the export webpart before the other values were received, so the URL was incorrect. The code in the RS Report Viewer that would send the value was triggered as follows:

Public Overrides Sub PartCommunicationMain()
  OnCellReady()
End Sub

So I knew that I needed to fire OnCellReady not just when the parts communicated, but also when the values were received. What I didn't know, but figured out, was that you can call this method as many times as you want. This made it a lot easier, since I didn't need to figure out the perfect time to call the method but instead just added another call here:

Public Sub RowReady(ByVal sender As Object, _
    ByVal rowReadyArgs As RowReadyEventArgs) _
    Implements IRowConsumer.RowReady

  If Not IsNothing(rowReadyArgs.Rows) AndAlso _
     rowReadyArgs.Rows.Length > 0 Then

    If rowReadyArgs.SelectionStatus = "Report" Then

      '' The first row has the data needed
      Dim dr As DataRow = rowReadyArgs.Rows(0)
      _serverUrl = dr("ServerUrl")
      _reportPath = dr("ReportPath")

    ElseIf rowReadyArgs.SelectionStatus = "Parameters" Then

      Dim dr() As DataRow = rowReadyArgs.Rows
      Dim i As Int32

      For i = 1 To dr.Length
        _parameters.Add(dr(i - 1)("Name"), dr(i - 1)("Value"))
      Next

    End If

    OnCellReady()

  End If

End Sub

So this meant that every time I got new information I would pass it back out to the export webpart. This is very useful when you're developing parts that all need to work together like these do.

Michael Rys Clarifies Xml in Yukon

Posted in Sql and Xml at Friday, April 16, 2004 12:56 AM Pacific Daylight Time

Michael Rys clarifies some old comments by Euan Garden about the Xml datatype in Yukon.

If you store the XML instance in an XML datatype, we will use an XML Reader that will parse the XML and store it in an internal binary format that - among other things - is efficiently mappable into an internal relational form that most queries execute over. That relational form can be generated by the primary XML index to avoid the runtime generation during queries and updates (see below). The XQuery and update expressions are being translated into internal algebra operations that work against the internal relational form.

I think this confirms some of my ideas about how XML will be stored in Yukon. SQL Server's strength is relational data, so it makes sense to make use of that strength when it comes to XML. It almost sounds like they are basically doing the OpenXML for you instead of forcing you to do it. To me this seems like a good thing since you will get the performance of SQL Server plus the flexibility of XML. Of course you will pay some overhead costs for this added flexibility, but in many cases it will be more than worth it.

Programmers also have the ability to define additional secondary indices on the primary XML index to index values, names, paths etc. The optimizer will use them to answer queries more efficiently as appropriate.

This is going to be very cool: XML indexes on paths (I'm assuming that means XPaths). So you can store your XML document and index it; does this mean you can have a foreign key in your XML document column?

Euan: Supporting XQuery now allows us a much richer query language for insert, update and select operations. And we've also implemented this in a way that allows us to mix and match relational and XML data....

Michael: On this I have no additional comment :-)

If you haven't started to read up on XQuery, you might want to seriously consider it. Combining XML and relational data via XQuery sounds like a very cool feature, but you will need to know XQuery to use it. I'm still waiting for the post office to deliver my copy of XQuery from the Experts, which I've been told is the book to get (I won't say who told me that, but I will say they probably have a biased opinion :)).

Update: Dare comments:

Very interesting stuff, the notion of an XML datatype baked into the SQL Server engine. Truly the rise of the ROX [Relational-Object-XML] database has begun.

 

Channel 9 and MSDN

Posted in General at Friday, April 16, 2004 12:10 AM Pacific Daylight Time

If you haven't taken a look at Channel 9 yet you should. My favorite part of channel 9 is the video clips done by Scoble. The latest one is a tour of MSDN with Chris Sells. Very fun to watch. If you've never been on campus before then these videos will give you a pretty good feel of what it's like.

My only question is, why does swallowing the red pill cause people to preface every answer to questions with the word “so“?

 

SharePoint Training

Posted in SharePoint at Thursday, April 15, 2004 11:30 PM Pacific Daylight Time
[via Don Box]

Fellow Band on the Runtime member Ted Pattison is doing SharePoint stuff over at Barracuda.

Alas, too many SDKs, not enough time…

This looks like something my group might be interested in. It's too bad we just missed the hands-on classes, but the online classes might work out well for us.

Microsoft links Outlook to Lotus

Posted in General at Tuesday, April 13, 2004 10:01 AM Pacific Daylight Time

[via Neowin]

Microsoft released a software add-on Monday for the 2003 and 2002 versions of Outlook that allows the e-mail clients to work with a server running IBM's Lotus Domino software. The Notes Connector is available now for free download. It works with versions 5 and 6 of Domino and allows Outlook to retrieve messages, calendar items, address book entries and to-do lists stored on the server.

I'm now running Outlook 2003! I've been using Lotus Notes for the last three months and I'm more than ready to go back to Outlook. This is very cool.

Adventures with Cassini, HTTP.SYS, XPSP2

Posted in at Tuesday, April 13, 2004 6:07 AM Pacific Daylight Time

Ok so recently I decided to switch my main development platform from Windows Server 2003 to Windows XP. This move was mainly inspired by the fact that I won't need to have IIS 6.0 to do web development in Whidbey based on what I saw at the PDC. So I decided to try to make the move a little early by setting up an XP machine and using Cassini as my dev server. I thought I would blog about my experience so far.

1) Setting up Cassini with VS.Net 2003.

This is where I hit my first snag. I setup Windows XP without IIS since I was planning on using Cassini. Next I install VS.Net and Cassini. I fire up Cassini and try to create a new web project. No luck. VS doesn't seem to work. Based on information I found in this post I was able to determine that I need to create a new project in an existing folder. Still no dice. Finally I figured out that I needed to install the web components in VS. So I had to install IIS, then add the VS web components, and then I uninstalled IIS. Finally, I can connect.

2) NTLM Authentication

Next snag was NTLM Authentication. Most of the web development I do now involves creating intranet sites, and guess what, Cassini doesn't do NTLM authentication. In my searches to see if anyone had extended Cassini to handle this I ran across CassiniEx which is a very cool project to extend Cassini that is run by Michael Carter (version .94 was released 4/2). As I looked into extending Cassini (or maybe CassiniEx) I remembered Don's post about using HTTP.SYS to handle ASPX processing. So I decided I should look into that before doing any work on Cassini.

3) HTTP.SYS and HttpListener

So Don's post on HTTP.SYS con ASP.Net sans IIS (note: his permalinks seem to be having some issues, the post is under march 2004) got me thinking. Maybe I could use HTTP.SYS to do my web development. So my first stop was the Web Transport's WebLog which had blogged about Don's post. I posted my question to them via the contact form asking if HTTP.SYS would handle the NTLM authentication or if that was an IIS specific feature. I quickly got a reply back that HTTP.SYS would handle this for me. Cool. So next it was back to Don's code so that I could try this out and if it worked maybe I would even install XPSP2 so that I could get HTTP.SYS.

Now Don's post says that he got it to “work against the Everett (.NET V1.1)”. So I assumed that I could take his code and do the same. My first clue that this was probably not the case was the second line of code:

using System.Collections.Generic;

At this point I'm just hoping that he left that in there by accident when he posted the code. But then I come across the HttpListener class. I'm not familiar with this class so I google it and the only reference seems to be Don's post. I also googled the HttpListenerContext class listed in the code and the only result was Don's post.

Conclusion

So at this point I decided I've spent enough time on this for now and I just installed IIS. :)

Hopefully I will be able to figure out how to get the HttpListener class which, based on another of Don's posts, seems to be part of the Whidbey version of System.Net. So even though there will be HTTP.SYS in XPSP2, it will still need some kind of wrapper (so it seems) before it can be implemented in this way. I will probably end up playing with this some more in the future and hopefully I will get some better answers.

MVP Summit

Posted in Sql and Xml | General at Friday, April 09, 2004 2:15 AM Pacific Daylight Time

Well I'm back from the MVP Summit. It was a great event. I really enjoyed the time I got to spend hanging out with everyone and discussing techy things.

One of the highlights of the event was the WS-Sushi party organized by Kirk. That was a lot of fun.

Almost everything I would like to write about is covered under NDA, so you will just have to wait. Hopefully we gave MS some good feedback that helps them to make better products.

Xml File with Report Data

Posted in Reporting Services at Friday, April 02, 2004 3:31 AM Pacific Standard Time

A while back Tom Rizzo posted some things to try out in Reporting Services. I tried some of them out and asked a few questions. Tom responded in the comments section on my blog (which I think is pretty cool) with some answers. Specifically, Tom answered my question about the usefulness of Reporting Services' export to Xml feature:

A couple of good reasons to do it. Perhaps you want to control formatting of the report and use XSLT to transform it to HTML. Maybe you want to batch feed the report into another system that only supports XML. There are a bunch more.

In addition to Tom's suggestions, Rohan Cragg posts on how he has found this feature to be useful as well.

... If however they are willing to do this in two steps (ok, you'll need to give them a bit of training) they can export to XML, then import in to Excel 2003 and they can pick and choose the fields they want to display, and where, because Excel can work out the schema all by itself (well, sometimes!).

Ok, so as a Microsoft MVP for SQL Server Xml, I agree having data in Xml format is a good thing. I guess my problem is not with the function itself but how it is implemented. If you export a report to Xml using this feature you get some interesting Xml. If I have a basic report with a table in it, and that table's name is Main, when I export that report I get some Xml that looks like:

<Report ...><Main ...></Main></Report>

So I write some XSL to convert this to HTML as suggested and everything is good. Now let's say the developer decides to rename the table from Main to Table1 (or I just want to convert a different report). Now what does my Xml look like?

<Report ...><Table1 ...></Table1></Report>

How does my XSL run now? It probably doesn't run at all. I could probably create some XSL that handles this by checking the element's location instead of its name, but what if the developer adds two tables to the report?
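For instance, a hypothetical XSLT fragment that matches by position rather than by name would survive the table rename, but it breaks down as soon as a report contains more than one table (the local-name() test is there because each exported report gets its own namespace):

```xml
<!-- Hypothetical sketch: select the first child of Report, whatever the
     report author happened to name it -->
<xsl:template match="/*[local-name()='Report']/*[1]">
  <!-- render the table's rows here -->
</xsl:template>
```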

When I first saw this feature I was very excited because I thought I would be able to really extend the existing reports by creating XSL views of them. However, I quickly ran into problems since it meant I would have to really pay attention to how all the report developers named the objects in their reports. When you also factor in how everything is auto named based on field names, etc, you get some very messy Xml that isn't easy to write XSL against.

Now I'm not saying that the Reporting Services team did it all wrong and I'm sure they thought through this quite bit. However, it seems to me that it would make a lot more sense to have some kind of standard output format that conforms to a single schema rather than a separate schema for every report that is created. The report always maps to a Report element, so why doesn't a table always map to a Table element (with a name property)?

So that is why I didn't find this feature to be very useful. But, as always, I could be stuck inside the box of my own thinking and not seeing the bigger picture.

BizTalk 2004 SDK Refresh Released

Posted in BizTalk at Friday, April 02, 2004 1:02 AM Pacific Standard Time

Get it here.

Note: It seems you must have BizTalk 2004 installed in order to install the SDK.