Thursday, August 30, 2007

Hotfix City

I was cleaning up a portal today and ran into a wide variety of errors. Several of them turned out to have the same root cause. The symptoms were wildly varying event errors like this:

Microsoft.Office.Server.Administration.ApplicationServerAdministrationServiceJob threw an exception:

"Message: Old format or invalid type library"

Or

Microsoft.Office.Server.Administration.ApplicationServerAdministrationServiceJob threw an exception:

"Message: Not Enough Storage"

The resolution required not one but two hotfixes, both of which target the .NET 2.0 framework.

I applied the first hotfix, KB 923028, after reading about it on Gayan Peiris' blog. After rebooting I started getting another error which read:

Unable to open shim database version registry key - v2.0.50727.00000

Err....what? Speculating wildly here (something I'm good at), I'm guessing the "shim database" was holding some key in the Windows registry that the Timer job wanted to know about but couldn't get at.

Anyhow, I did a quick search to figure out how to resolve it, and there again Gayan's blog popped up, suggesting I download and run KB 918642. I figure he was two steps ahead of me the whole way!

By the way, Gayan's the new SharePoint MVP at Microsoft in Sydney. Check out his blog here. It's got a huge amount of SharePoint fixes. Hopefully you won't need any of them!

Tuesday, August 28, 2007

LINQ To SQL ≠ N-Tier Architecture?

Ok, LINQ seems pretty wonderful, and I can see the utility of it - and with regards to SharePoint we'll probably end up using it when Microsoft comes out with an official LINQ to SharePoint, or the community does. Heck, it may even replace (or at least abstract) the dreaded CAML code....

With LINQ, you get (a quick sketch follows the list):

  • Design-time type checking of data queries
  • Code syntax that is consistent across objects, XML, and relational databases (and, really, anything else)
  • Queries integrated right into the .NET languages themselves
  • Full IntelliSense support when coding what used to be a bunch of string values

All good stuff. But what I don't understand is how it fits with traditional N-Tier architecture. Is this more of the same old "you can write 70% less code if you don't mind your web page directly accessing your database" trend, or can we actually use this in a properly architected application?

I've been following along on Scott Guthrie's blog; he has a detailed series of posts about using LINQ, but he hasn't yet addressed where LINQ might fit in the N-Tier architecture.

Rick Strahl is already blogging about some of the limitations when trying to combine LINQ in the middle tier with POCO (Plain Old C# Objects), but it isn't clear if this is a beta 2 problem or a design limitation. Here's his conclusion:

The more I look at LINQ the more I'm coming to the conclusion that using LINQ in a middle tier - especially in a generic business object architecture - is not going to work well. There are many little problem issues that when all added up point at more problems being created than solved by the entity generation and easy CRUD layer.

Manning Publications' upcoming LINQ in Action book has an early access program that lets you download PDFs and view chapters as the authors write them (the ink stains on the PDF are still wet!). It's a great read. Incidentally they're the folks behind LINQ to Amazon, which was one of the earliest LINQ extensions that showed up on the 'net.

Chapter 14 of this book talks about how to integrate LINQ into applications. It makes the point that LINQ "out of the box" is more of a Rapid Application Development tool - which goes back to the "70% less code" approach Microsoft is so fond of. However, the book suggests specific concerns that should be noted when trying to use LINQ in a DAL in the traditional way.

LINQ defers calls to the data source until the query is enumerated, so if you're returning entities or queries to a business layer (as IQueryable<Whatever>), you might be introducing a dependency: your business layer ends up executing a LINQ to SQL query at runtime. You can turn off lazy loading by setting the DataContext's DeferredLoadingEnabled property to false, and by explicitly loading an entity's child records before returning the entity from the DAL.
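Here's a rough sketch of what a "confine it to the DAL" method might look like. The Order/OrderDetail mappings and table names are invented stand-ins for what the LINQ to SQL designer would normally generate, so treat this as an illustration of the pattern rather than a recipe:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hand-mapped stand-ins for designer-generated entities; names are invented.
[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)] public int OrderID { get; set; }
    [Column] public string CustomerID { get; set; }

    private EntitySet<OrderDetail> _details = new EntitySet<OrderDetail>();
    [Association(Storage = "_details", OtherKey = "OrderID")]
    public EntitySet<OrderDetail> Details { get { return _details; } }
}

[Table(Name = "Order Details")]
public class OrderDetail
{
    [Column(IsPrimaryKey = true)] public int OrderID { get; set; }
    [Column(IsPrimaryKey = true)] public int ProductID { get; set; }
}

public class OrderDal
{
    private readonly string _connectionString;
    public OrderDal(string connectionString) { _connectionString = connectionString; }

    // Returns fully loaded, disconnected entities; nothing IQueryable escapes the DAL.
    public List<Order> GetOrdersForCustomer(string customerId)
    {
        using (var db = new DataContext(_connectionString))
        {
            // Stop entities from lazily calling back into the database later.
            db.DeferredLoadingEnabled = false;

            // Eagerly load the child rows the business layer will need.
            var options = new DataLoadOptions();
            options.LoadWith<Order>(o => o.Details);
            db.LoadOptions = options;

            // ToList() materializes the results before the DataContext goes away.
            return db.GetTable<Order>()
                     .Where(o => o.CustomerID == customerId)
                     .ToList();
        }
    }
}
```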

So, if I understand correctly:

  • LINQ to XML or XSD doesn't seem to foil N-Tier architecture, since LINQ is essentially a tactical solution to make it easier to read or write XML, and doesn't cross any layer boundaries.
  • LINQ to Objects is an in-memory LINQ technique that, again, shouldn't cross any boundaries. Your method signatures will consume and provide POCO, and you use LINQ to iterate over them completely within the Presentation or Business layer. So far, so good.
  • The real issue seems to be LINQ to SQL...and the issues can be avoided if the LINQ to SQL code is confined entirely to the DAL, lazy loading is turned off, and disconnected entities and lists are passed back out of the DAL to the Business tier. But if you do this, the only apparent advantages of LINQ to SQL over any other ADO.NET data access method are the consistency of coding and the design-time safety of the strongly typed queries (which you can already get with Strongly Typed DataSets!)

Please, can anybody help me understand what an appropriate N-Tier architecture might look like with LINQ To SQL?

Sunday, August 26, 2007

News Bits

Various bits of news and rumours:

  • K2 blackpearl has gone RTM. You can download it from the K2 website after logging in with your user ID. Chris O'Connor is already blogging about the installation...check out his blog postings here and here.
  • There is a SharePoint Asset Management Toolkit currently in beta at Microsoft. The intention behind it is to give an organization an overview of all the SharePoint assets floating around on the network: WSS sites, portals, etc. The idea being to get more of a handle on what's being built out there by enthusiastic 'Pointers.
  • If you're keen on exploring LINQ, you'll probably be interested in the LINQ to SharePoint community project. Led by Bart De Smet, C# MVP, the project is currently in Alpha 0.2.3 (which is only four light years away), and the aim is to provide the following features (taken from Dimitri Clement; a rough sketch of what a query might look like follows at the end of this item):

    • A custom query provider that translates LINQ queries to CAML, the Collaborative Application Markup Language used by SharePoint for querying.
    • Support for LINQ in C# 3.0 and Visual Basic 9.0.
    • An entity creation tool, SpMetal, to export SharePoint list definitions to entity classes used for querying.
    • Visual Studio 2008 integration for entity creation (a.k.a. SPML).
    • Connections to a SharePoint site either through the SharePoint object model or via the SharePoint web services.
    • Planned support for updating through entity types.

    You can find the bits at LINQtoSharePoint on CodePlex and the team blog is at http://community.bartdesmet.net/blogs/LINQtoSharePoint/.
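I haven't pulled down the alpha bits yet, so don't take this as the project's actual API, but the general idea is that SpMetal hands you an entity class per list and you query it with ordinary C# 3.0 syntax while the provider worries about the CAML. In spirit it should look something like this (the Announcement class and its properties are invented, and I'm cheating by querying an in-memory list where the real provider would translate the query):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity class of the kind SpMetal would generate from an
// "Announcements" list; the property names are made up for illustration.
class Announcement
{
    public string Title { get; set; }
    public DateTime Expires { get; set; }
}

class Demo
{
    static void Main()
    {
        // Stand-in data source. With the real provider, the where/orderby
        // below would be translated into a CAML <Query> instead of running
        // in memory.
        IEnumerable<Announcement> announcements = new List<Announcement>
        {
            new Announcement { Title = "Tech.Ed wrap-up", Expires = DateTime.Today.AddDays(7) }
        };

        var current = from a in announcements
                      where a.Expires >= DateTime.Today
                      orderby a.Title
                      select a.Title;

        foreach (string title in current)
            Console.WriteLine(title);
    }
}
```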

Friday, August 24, 2007

SharePoint Development Tips

Mark Jones has an excellent post on best practices for SharePoint development on his blog, entitled "Sharepoint 2007 - Development and Engineering Practices".

He places a great deal of emphasis on team development using Visual Studio and some sort of source control. He further recommends that as much as possible be automated via scripts or code to ensure that you can minimize the deployment effort, avoid errors, and manage code changes and versioning.

He's also a fan of test harnesses and not relying on SharePoint's object model to do all the testing for you.

One of my favourite tips is his advice to customize as a last resort. Although the SDKs and tools are getting better, there's a lot that can be done by configuring what's already out-of-the-box long before any code needs to be touched. Office 14 is already underway, and while I'm sure Microsoft will try to minimize the changes, it's a sure bet that enough will change between versions to make upgrading a roll-your-own portal painful. In fact, it's amazing how much can be achieved with just a little thought about the existing features.

Mark's got a lot of great conclusions and ideas in there. I encourage you to check it out.

Thursday, August 23, 2007

The Microsoft SharePoint Team has just announced a major release to the SDKs - version 1.2 of each. The full details are here: http://blogs.msdn.com/randalli/archive/2007/08/22/just-published-major-update-to-the-moss-and-wss-downloadable-sdks-8-22-2007.aspx

This is wonderful news - not only will it make developing against the object model more understandable, but there are numerous samples and, best of all, a BDC editor tool! Check this page out:

The Business Data Catalog Definition Editor provides a visual tool for creating an Application Definition for BDC in MOSS 2007. Features include:

  • Underlying XML is abstracted by the design surface and properties window
  • Drag and drop web methods, tables, or views to create line of business (LOB) connections.
  • Entities and methods are created automatically from database metadata and WSDLs.
  • Additional method instances can be added to further enhance the database or web service connection.
  • Method instances can be tested from within the tool, enabling incremental development of LOB connections
Background

Currently, writing an application definition to connect the BDC to a LOB system is a manual process. This requires an understanding of both how the LOB system is configured and what must be included in the XML to satisfy the BDC. Having a tool to simplify this process not only lowers the initial knowledge threshold for administering the BDC, it also lessens the required work of the user (such as testing, making modifications, etc.).

[...]

Highlights
  • Tool supports databases (SQL, Oracle, OLEDB, and ODBC) and web services
  • Drag and drop design surface for selecting DB tables or web methods:
    • Metadata is automatically extracted from databases by dragging and dropping tables
    • Web Services require a few additional steps to completely configure the connection
  • Users can import and export Application Definition XML files
  • Users are able to test method instances incrementally from within the tool
  • The tool is not required to run on a web front-end
  • Associations are created automatically when foreign keys are selected; they can also be created easily for web services by adding an Association method instance

Objectively speaking, the SDK and tool release just made life 100% easier (32.4% of the time)1. You can't argue with those numbers!

The Microsoft team has also stated that it has doubled the resources working on the SDKs, which means we can look forward to even more improvements in the future. They're obviously listening to the community - they even make that point in their announcement - and it's very welcome indeed.

1 Actual results may vary.

Wednesday, August 22, 2007

Yesterday the August meeting of the Sydney SharePoint User Group was held. The speakers were Han Duong and Brad Saide, both from LivePoint.

Han spoke about Themes in SharePoint. He gave a quick example of how to change a SharePoint site using the existing Themes in a MOSS installation, and suggested some tools, such as Heather Solomon's blog posting about all the SharePoint CSS classes, the Firebug and Web Developer plugins for Firefox, and the IE Developer Toolbar for Internet Explorer.

He then showed how to customize Themes using SharePoint Designer.

  1. The first step is to copy an existing theme set from the Office 12 hive on the file system (<installation drive>\Program Files\Common Files\Microsoft Shared\web server extensions\12\Templates\Themes) and rename it to the name of the new theme you want to create ([Theme Name]).
  2. Next, modify the name of the .info file contained within this new theme folder to be the [Theme Name].info. This is what SharePoint will look for when it tries to load your theme.
  3. The next step is to tell SharePoint about the new theme: go to the 1033 folder in your hive and edit the SPThemes.xml file. Just copy one of the existing Theme element entries and modify it to reference your theme name.
  4. Create a thumbnail for the theme by copying an existing theme thumbnail from \Templates\Images and renaming it to whatever you specified in the SPThemes file.
  5. Reset IIS
  6. View the Themes in the Site Settings section of the SharePoint portal. You can apply your new custom theme to a site and then take a screenshot; you can then use this as the thumbnail to overwrite the one you copied.

An interesting point Han mentioned is that when you make changes to Theme CSS files in SharePoint Designer, those CSS changes are saved to the database. However, on initialization or cache expiry the CSS and other theme information is loaded directly from the file system.

Therefore any changes made subsequently to the file system will overwrite the work you've done in SharePoint Designer. So he suggested that once you've made changes you are happy with in SharePoint Designer, you immediately copy them into the file system version of the CSS file to protect yourself.

Han mentioned that Themes are a good fit for simple design changes to the look of a SharePoint site, while Master Pages are more useful when changing the functionality of the site or the way it is organized.

Another good tip that was raised was using Themes as a quick way to mock up a client's site when doing demonstrations as it helps them relate to what they are seeing (rather than just viewing the out-of-the-box SharePoint installation).



Next, Brad gave us an overview of the Business Data Catalogue in MOSS 2007 Enterprise, and showed how it can be used to surface Line-of-Business information and provide dashboard views of disparate data.

He first explained the challenges of integrating line-of-business systems into SharePoint. These challenges include:

  • Developers have to write integration code
  • They communicate directly with the native API
  • Each attempt is a single-purpose effort
  • There is an ongoing maintenance and update burden
  • It is difficult to create "one place to go" when the data is in many places
  • Each new business system requires new effort

The BDC is a good solution for all of this. It provides a unified, consistent way to expose data within SharePoint by surfacing it from backend applications. It is declarative, requiring no code. It is also a centrally-managed system. I would also suggest it is "universal": a contract that all developers can follow.

Brad pointed out the things the Business Data Catalogue is good for: surfacing LOB data, mashing up information from multiple Line of Business apps, searching across all of this data at the same time, pulling LOB data into libraries and lists where it can add to the existing portal information, and helping to populate user profile information.

What it isn't good for:

  • It isn't a replacement for existing Line of Business functionality
  • It isn't transactional or a message broker (a la BizTalk)
  • It doesn't do data transformation (SQL SSIS)
  • It isn't a data adapter (iWay)

Brad then presented a demo of how the BDC could surface HR system data stored in Oracle onto the MOSS portal. He showed how easy it was to use the free version of BDC Meta Man to build the initial BDC app schema file, and showed us how the BDC web parts could present and filter this information.

Updates to the Oracle database appeared upon refresh, and the BDC information could be added as metadata to existing lists and libraries. BDC Actions allowed HR staff to view the profile data of the records or even launch custom actions using URLs, such as opening a window to search Seek.com for job candidates when a particular job title was selected on the SharePoint page.

I think the BDC is one of the best features of Office 2007 with some of the least documentation and fewest tools! A painful paradox.

Both Brad and Han did a good job and I think we all enjoyed the session.

Saturday, August 18, 2007

If you're Sydneyside then you may be interested in attending two events this week: the .NET User Group and the SharePoint User Group.

The SharePoint User Group meets at 5:30pm on Tuesday at Unique World's office on Level 14, 24 Campbell Street.

Come along to the August User Group to hear Brad Saide from Live Point presenting on Business Data Catalogue and participate in a discussion with Han Duong from Live Point on customising SharePoint Themes.

In addition to Brad & Han's presentations, Jey from K2 will showcase K2 blackpearl, the BPM product from K2, and how it extends MOSS 2007.

The .NET User Group will be meeting at the Microsoft campus on Epping Road in North Ryde on Wednesday from 5:45pm to 9pm. This month Johann Kruse will be talking about Office Communicator 2007. You can find out more information about the event here.

I'll try to get more information on how exactly Office Communicator hooks into SharePoint (presumably through the existing "presence awareness" feature; there is also an API that can surface a lot of information).

Hope to see you there. Happy coding!

Tuesday, August 14, 2007

SharePoint Performance and Capacity Planning

This is based on Mike Fitzmaurice's seminar at Tech.Ed Gold Coast.

Mike defines capacity planning as the

"art of evaluating a technology against the needs of an organization and making educated decisions on how to meet those needs"

 He stresses that it is still more of an art than a science as there are a lot of imponderables and value judgements.

In terms of organizing the planning, he suggests breaking the process up into four distinct phases. These are:

  1. Phase 1: Plan for software and hardware boundaries.
  2. Phase 2: Estimate performance and capacity requirements
  3. Phase 3: Plan hardware and storage requirements
  4. Phase 4: Test, test, test your design

Phase 1: Plan for software and hardware boundaries

First you have to evaluate the environment and the existing and potential limitations of the solution.

In SharePoint, hardware scalability isn't a huge concern as you can scale up and out by adding more servers over time. The big impact is in the software; specifically how you configure SharePoint and how you use it.

The main SharePoint objects to care about at this stage are: Sites, People, Search, Logical Architecture objects, and Physical objects (search indexes, physical database files, and transaction logs).

These objects can all grow quite large, and SharePoint is designed to handle that to a degree, but there are finite limits. After running through multiple load-balancing tests, Microsoft has come up with the following benchmarks, beyond which performance suffers.

Recommended limits (SharePoint element, limit, and what exceeding it hurts):

  • Site collections in a single content database: 10,000 (general SharePoint performance)
  • Site collections: 50,000 per web application (general SharePoint performance)
  • Content databases: 100 per web application (general SharePoint performance)
  • SharePoint sites: 250,000 per site collection (general SharePoint performance)
  • Shared Service Providers: 3 per farm TOTAL (asynchronous processing; it is very seriously recommended that you do not use more than 3 SSPs per farm!)
  • Indexed documents (in the Search catalog): 50,000,000 per search index (indexing and querying performance)
  • Web servers: 8 per database server (general SharePoint performance)
  • Documents in a flat document library not using folders or indexed views: 4,000 maximum (searching, viewing, and navigating the library)
  • Documents in a document library using folders or indexed views: 1,000,000 maximum (library performance)
  • Documents per nested folder in a document library: 2,000 maximum (library performance)

 

Some other helpful tips:

  • Don't create too many distinct web sites in IIS when you are installing SharePoint for the first time, as the application pools and websites use up CPU resources
  • Create usage profiles to try to track common users and estimate how their daily actions might consume resources or generate content
  • Mike really recommends that you turn on IIS compression.
  • BLOB Caching is a feature that isn't well known in SharePoint but serializes large objects to disk on the Web Front Ends to avoid database round-tripping.
  • You can delay loading the core.js file if your users will be anonymous or readonly...instructions are here.

Phase 2: Estimate performance and capacity requirements

The requirements are a compromise based on your desired performance, capabilities, and resources.

Remember to plan for peak concurrency since your farm could be humming along most of the time until peak demand brings it to a crawl.
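This isn't from Mike's slides, but it's the sort of back-of-the-envelope arithmetic I find useful here: take the usage profiles from Phase 1, guess how many requests each user fires off per hour, and work out the peak requests per second the farm has to survive. All the numbers below are invented:

```csharp
using System;

class PeakLoadEstimate
{
    static void Main()
    {
        // All figures invented for illustration; plug in your own profile numbers.
        int totalUsers = 10000;           // provisioned users
        double activeRatio = 0.40;        // fraction using the portal on a given day
        double peakConcurrency = 0.10;    // fraction of active users hitting it at once
        double requestsPerUserHour = 36;  // a page every couple of minutes for a "typical" profile
        double peakFactor = 2.0;          // safety margin for the busiest hour

        double activeUsers = totalUsers * activeRatio;
        double requestsPerSecond =
            activeUsers * peakConcurrency * requestsPerUserHour / 3600.0 * peakFactor;

        Console.WriteLine("Estimated peak load: {0:F1} requests/second", requestsPerSecond);
        // 10,000 * 0.4 * 0.1 * 36 / 3600 * 2 = 8 requests/second in this example.
    }
}
```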

Microsoft recommends 64-bit hardware to improve performance, but be aware that many applications and utilities have problems with this: the PDF IFilter is one example (although there is now a 64-bit version). MOSS supports mixed mode, where your database server runs on 64-bit hardware while the Web Front Ends are on 32-bit boxes.

Phase 3: Plan hardware and storage requirements

If you're going to be storing your file system content in SharePoint, multiply its size by 1.2 to 1.5 to reflect the additional metadata and storage overhead. Your database will have to accommodate at least this much space.

Plan for the search index to take an additional 30% of the size of all the content indexed on a single server.

If you have a dedicated Query Server, multiply the index size by 2.5.

If you're enabling BLOB caching in SharePoint, remember that it is serializing large objects to file, and you will have to have room for this.

Plan for future database growth!
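To put some invented numbers against those rules of thumb, say you're migrating 100 GB of file shares into MOSS:

```csharp
using System;

class StorageEstimate
{
    static void Main()
    {
        // Invented input: raw size of the file-share content being migrated.
        double contentGb = 100.0;

        // 1.2x - 1.5x for metadata and storage overhead in the content databases.
        double dbLowGb  = contentGb * 1.2;
        double dbHighGb = contentGb * 1.5;

        // Roughly 30% of indexed content for the search index on a single server.
        double indexGb = contentGb * 0.30;

        // 2.5x the index size if a dedicated query server holds a copy.
        double queryServerGb = indexGb * 2.5;

        Console.WriteLine("Content databases: {0:F0}-{1:F0} GB", dbLowGb, dbHighGb);   // 120-150 GB
        Console.WriteLine("Search index:      {0:F0} GB", indexGb);                    // 30 GB
        Console.WriteLine("Query server:      {0:F0} GB", queryServerGb);              // 75 GB
        // ...plus room for BLOB caching on the web front ends and future growth.
    }
}
```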

Phase 4: Test, test, test your design

You should definitely test the performance under a variety of scenarios.

To do this, first establish goals for your tests. Good goals might be to mimic the standard user profiles you created in Phase 1 at peak times and see how the SharePoint farm responds.

Create a test farm that closely mimics the production farm. I think it's OK to use virtualization as it will cut down on the number of physical servers you need. Populate the test farm with data that is representative of the real thing. You could try exporting existing content databases from a development or production farm into the test farm. There is also a data population tool available on (where else?) CodePlex.

You can use Perfmon counters to assess responsiveness. There is a tool called Fiddler that can help you analyze the requests and responses between SharePoint and the client, which makes it easier to figure out what's going on.

SharePoint can be very complex because there are a lot of moving parts. One of the most important aspects of a proper MOSS deployment is planning for future capacity and performance, and these tips by Mike are a great help in that regard. The white paper that discusses many of these issues in greater depth is available here.

Monday, August 13, 2007

Tech.Ed 2007 Gold Coast Day 3

The third and final day of the conference was more of a wind-down, but still had tons to see and do. I caught the last bit of the first session on Rules Engine Use in Windows Workflow and then made my way to the seminar on Best Practices for Team Based Software Development. This was mostly about how Team Foundation Server can help organize the common artefacts and work habits of a development team.

The speaker, Anthony Borton, covered the signs of a great team and mistakes that teams can make as well. He then displayed a chart of various stages of the Team Development Maturity Model he had developed with a teammate of his. The stages were:

  1. Chaos
  2. Shared - standard location for team documents and not much else
  3. Tracked - use of standard forms and code
  4. Defined - improving level of automation for common tasks
  5. Instrumented - stabilized metrics and repeatability
  6. Optimized - continual process improvements, tool refactoring, ever-increasing automation

I think this is a fairly descriptive overview of the stages of collaboration a team can achieve, although I'm not sure how concrete or prescriptive it is.

Anthony also talked about the requirement to do code branching and best practices such as structured feature decomposition (which I guess means breaking down the business requirements into smaller units of fertilizer).


Next up, Mike Fitzmaurice talking about Capacity Planning for SharePoint. Lots of interesting information here, which I'll cover in more detail next week. The gist of it was that capacity management is an art, not a science, although there are plenty of metrics in it.

He described a way of dividing a capacity planning study into discrete phases so that issues could be dealt with in a systematic and holistic way.

The first phase is to plan for the software and hardware boundaries. The next is to estimate the performance and capacity requirements. Phase 3 is to plan hardware and storage requirements; and the final phase is to test the design.

Again, he had some great facts, figures, and tips which I'll cover in greater detail in a dedicated post later on.


The final seminar of the day was given in a packed room by self-confessed "language geek" Joel Pobar. He's a former Microsoft Program Manager on the common language runtime (CLR), which underpins everything we do .NET-wise.

This presentation was a virtuoso overview of computer language features and trends...especially the combination Microsoft is trying to achieve between Static languages, Functional languages, and Dynamic languages.

Hopping into PowerShell command windows, Joel created language demonstrations on the fly using a variety of languages interchangeably, including JavaScript, IronPython, and the next version of VB.

At one point, to a huge ovation, he created a LINQ to SQL statement that generated XML elements in its select clause, which were then added to an RSS XML fragment that hadn't even been declared yet, but would exist at runtime!!!!

Now I'd love to talk more about this, but it was so far over my head, I felt like the man-ape trying to figure out the Monolith (but hey, at least the ape figured out tools right after that!)....It was just one of those moments when you know something special is happening but you're not quite sure what it is. So I just went "woooo" when everyone else did, and nodded my head emphatically at various regular intervals :) 
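For what it's worth, here's my rough guess at the flavour of that trick: building XML elements right inside a query's select clause and dropping them into a larger RSS fragment. This is LINQ to Objects and LINQ to XML over invented data, not Joel's actual demo (which was running live against a database):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class RssDemo
{
    static void Main()
    {
        // Invented stand-in data; Joel's version queried a database via LINQ to SQL.
        var posts = new[]
        {
            new { Title = "Tech.Ed Day 3", Url = "http://example.com/day3" },
            new { Title = "Capacity Planning", Url = "http://example.com/capacity" }
        };

        // The select clause builds XML elements directly (functional construction).
        var items = from p in posts
                    select new XElement("item",
                        new XElement("title", p.Title),
                        new XElement("link", p.Url));

        // The query result is dropped straight into a larger RSS fragment.
        var rss = new XElement("rss",
            new XAttribute("version", "2.0"),
            new XElement("channel",
                new XElement("title", "Conference notes"),
                items));

        Console.WriteLine(rss);
    }
}
```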

Guess I've got some more lurnin' to do!


After this, all that was left was the wrap-up. The beginning of the end was a locknote speech by the head of the Computer Forensic team of the Australian Federal Police, and then it was time to say goodbye to Frank Arrigo (who is heading to Microsoft Corporate in America), and watch Alan Coulter win $20k worth of prize swag (no idea how he got it home!).

I'm looking over my (long!) blog postings of the three days of Tech Ed - it's amazing how much learning, networking, and fun can be crammed into such a short space of time. This was my first development conference and definitely not my last! Kudos to all the organizers for such a great job.

Sunday, August 12, 2007

Tech.Ed 2007 Gold Coast Day 2

Day 2 was a full one. Software + Services and Web 2.0 in the Enterprise were two seminars given by Michael Platt but again I won't report on these since I'm going to blog about all the Web 2.0 stuff in detail later this week.

The Enterprise Library 3.1 session was given by Tom Hollander, who has left the Patterns and Practices Group and moved back to Sydney. He stated the goals of 3.x were to:

  • Address developer feedback;
  • Provide new blocks to address standard issues, such as Validation and separating policy from business rules;
  • Support integration with the new features in the .NET 3.0 framework;
  • Simplify development of new application blocks;
  • Retain backwards compatibility with EntLib 2.0

He demonstrated the major new blocks, showing how easy it is to create validation rules programmatically, using attributes, or via configuration files. The validation block allows developers to specify the rules once and have them apply everywhere - in the presentation layer as well as throughout the other tiers.
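I don't have EntLib 3.1 open in front of me, so treat this as a from-memory sketch of the attribute-based flavour rather than gospel: you decorate the business object with validator attributes and ask the Validation facade to evaluate them (the same rules can instead live in configuration):

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

// Hypothetical business object; the rules live with the class as attributes.
public class Customer
{
    [NotNullValidator]
    [StringLengthValidator(1, 30)]
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var customer = new Customer { Name = "" };

        // Ask the block to evaluate every rule defined for the type.
        ValidationResults results = Validation.Validate(customer);

        if (!results.IsValid)
        {
            foreach (ValidationResult result in results)
                Console.WriteLine("{0}: {1}", result.Key, result.Message);
        }
    }
}
```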

The Policy Injection block is a neat idea, since it allows the separation of cross-cutting concerns such as caching, logging, and validation - in short, managing many of the other blocks in the Enterprise Library.

One potential issue is that any code using the Policy Injection block has to inherit from MarshalByRefObject, which means if you have a custom framework inheriting from a custom base object, you may not be able to use this block.
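Again from memory (so double-check against the documentation), the MarshalByRefObject requirement shows up because you create instances through the block's factory and get back a proxy that intercepts each call; roughly:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection;

// The class derives from MarshalByRefObject (or is used through an interface)
// so the block can hand back a proxy that intercepts calls.
public class OrderService : MarshalByRefObject
{
    public void PlaceOrder(string product)
    {
        Console.WriteLine("Ordering " + product);
    }
}

class Demo
{
    static void Main()
    {
        // Create through the factory, not "new", so the configured policies
        // (logging, caching, validation, ...) get wrapped around each call.
        OrderService service = PolicyInjection.Create<OrderService>();
        service.PlaceOrder("widgets");
    }
}
```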

He also demonstrated WCF integration and the use of Handlers to place Enterprise Library functionality in the IIS pipeline, which is very convenient as you can sit them in front of any application without writing any code.



The next seminar was again given by Tom and was another conference highlight for me. This one was on Software Factories, a topic I am very interested in and that we will be hearing much more about.

Tom explained the challenges faced in solution development by describing the "gap" which exists between the general purpose frameworks provided by vendors such as Microsoft, and the solutions developers build upon these frameworks. The gap incorporates things such as business knowledge, requirements, architectures, technology decisions, and so on, and represents the additional effort that has to be undertaken with each new project.

Development teams currently bridge this gap with skilled labour, but this costs a great deal, takes a long time, introduces risk to the project, provides relatively poor quality and consistency, poor traceability, etc etc. Basically we know that the current development process is difficult and crude!

The software factory attempts to close the gap as much as possible by combining software automation (code generation) with best practices and guidance (domain expertise). As he defined it:

"It's a collection of tools and assets for building applications of a particular type. When turned on in a development environment, it provides a guided experience to the members of the team."

The software factory concept is related to Domain Specific Languages (DSL) which are really languages or code targeted for a specific domain, such as Banking or Medicine.

Tom demonstrated the use of the Patterns and Practices group factories that currently exist:

  • Web Client
  • Smart Client
  • Mobile Client
  • Web Service

Microsoft is hoping to bundle software factories into the next version of Visual Studio (after 2008). As an early demonstration of the "factory of the future" he showed us the "Modelling Edition" (the name makes it sound like a Fashion Industry DSL).

Using a UML- or class designer-style interface, he dragged and dropped units of functionality onto a canvas to create an abstract model of a solution, and was then able to generate the code from it. It's a compelling vision and where the industry needs to go to reach the modern age of computer development, but we're still a long way off.

For more information, check out the service factory on CodePlex.



"Stranger in a Strange Land: The Adventures of a SOAP Fan" was a fascinating and often hilarious exploration of the web as a middleware platform, given by Dr. Jim Webber whose blog is World Wide Webber (no relation to the World Wide Web).

Dr. Jim discussed the major architectural characteristics of the web:

  1. It is scalable
  2. It is fault tolerant
  3. It is recoverable
  4. It is secure
  5. It is loosely coupled

He made the valid point that these are precisely the virtues we would seek in any major enterprise application. Having concluded that the web's nature makes it a desirable platform, he proposed that we should be building our systems to exploit that nature.

He had several proposals:

  1. Links are good. In fact, if the web itself is a state machine workflow (resources are "states" and requests are "transitions") then links represent state transitions.
  2. HTTP verbs provide all the basic functionality an application needs: Create, Read, Update, and Delete actions, plus metadata provided by the HEAD and OPTIONS verbs and Status Codes that assist processing by letting us know the result of our method calls (a small sketch follows this list).
  3. Microformats will create the small "s" semantic web. Tim Berners-Lee's vision of the Semantic Web will never be realized, since it ignores the fundamental reality that the web is a chaotic human creation, but microformats are entirely achievable and can provide real context and knowledge.
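This wasn't part of Jim's talk, but here's a tiny illustration of proposal 2 using nothing fancier than HttpWebRequest. The order URI is hypothetical; the verb carries the intent and the status code tells you what happened:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class RestSketch
{
    static void Main()
    {
        // Hypothetical resource URI; in the web-as-middleware model the
        // "order" is just a resource you manipulate with HTTP verbs.
        string orderUri = "http://example.com/orders/42";

        // PUT = create/update the resource's state.
        var put = (HttpWebRequest)WebRequest.Create(orderUri);
        put.Method = "PUT";
        put.ContentType = "application/xml";
        byte[] body = Encoding.UTF8.GetBytes("<order><item>latte</item></order>");
        put.ContentLength = body.Length;
        using (Stream s = put.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)put.GetResponse())
            Console.WriteLine("PUT returned {0}", response.StatusCode);

        // GET = read it back; the status code tells us whether it exists.
        var get = (HttpWebRequest)WebRequest.Create(orderUri);
        using (var response = (HttpWebResponse)get.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine("GET returned {0}: {1}", response.StatusCode, reader.ReadToEnd());
    }
}
```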

He then gave a funny example of how purchasing a Starbucks coffee could be achieved using this model. I still think this development model is a stretch since the tools and development mentality don't really support this yet, but as a thought exercise his ideas were top-notch.



The final seminar was "Building a Complete Web Application in ASP.NET 3.5 and Visual Studio 2008 (Part 2)" given by Jeff King. He gave a great presentation, creating functionality on the fly. He spent a lot of time demonstrating the powerful new JavaScript support that's available. Although he did a fine job, I still remember writing JavaScript code during the Browser Wars, and in spite of the advances I think I will need to give it a few more years before the mental scars heal :)



After that it was time for Movie World. Although I only had time to ride the Batman Drop-thingy and the Superman roller coaster once, both were great. Superman especially was crazily fast. After the park closed (pretty early, really) I headed out to Surfer's Paradise with a couple of Microsoft guys, Richard and Shannon. We ended up leaving when everything shut down at 4:30 and I crawled into bed at 5 for a couple of hours of shut-eye. It's a hard knock life for us programmers!

Friday, August 10, 2007

Tech.Ed 2007 Gold Coast Day 1

So Tech.Ed 2007 came and went on the sunny Gold Coast and over the next few days I'm going to be sleeping and blogging (not necessarily at the same time). Here's my recap of the first day:

The keynote speech was given by Michael Twigg, the Production Resource Manager of Animal Logic, an Australian animation firm that makes special effects for movies, TV, and commercials. Some of their achievements include effects work for the Matrix (the ghost twins), 300, and Happy Feet. It was an interesting example of IT in an industry I'm not familiar with.

They can spend months designing a shot that will be used for two seconds. As one example, Michael showed us how many levels of matte painting, 3D effects, and live shots all had to be pasted together to create a two-second shot of Xerxes' army crossing into Greece for the movie 300.

We also learned the surprising fact that all their servers run Windows NT! Well, at least he didn't mention the "L" word (no, not that "L" Word - the other one).


The first seminar I attended was "Lap Around Visual Studio 2008" by Tony Goodhew, the Product Planner for the product. Tony described the major goals of the product version, namely language advances, web development improvements, and increased support for AJAX, JavaScript, and CSS.

He covered some of the neat new features in VS2008, including framework targeting, nested master pages, the CSS property windows, split pane view, and LINQ to SQL.

Other things he touched on:

  • Hitting CTRL while using IntelliSense makes it translucent so you can see the code that is below it. Highlighting code and hitting CTRL-K-D formats it, which is useful when you are pasting XML or HTML fragments into a page.
  • Tony also talked about XAML and how Visual Studio is the developer's platform, while XAML presentation can be handled in the new Expression suite that Microsoft has released. XAML separates the concerns so both roles can work in parallel and neither overwrites the other's work.
  • There is a service editor in Visual Studio for WCF projects. Not sure if this is included in the previous WCF extensions toolset but it looks quite useful.
  • There is support for local data caching using a SQL Server Compact Edition (SSCE)-based local data cache. This is intended to help provide a standard solution for applications that may have infrequent access to a network. Tony demonstrated how to set up the wizard-driven cache and then showed an app connecting and disconnecting to prove that the local cache would synchronize properly.


"Project 2007 Timesheets and Reporting" was the next seminar on the list. It covered the various options for using Project 2007 and Project Server 2007 for creating timesheets, assigning and tracking human resources, and presenting and reporting on this information. 

As expected, the new version of Project Server 2007 uses WSS 3 project sites to host Project actions, plans, and reports.

When an employee logs in to the Project Timesheet site, he is presented with a series of links, grouped into "Tasks", "Timesheets", "Approvals", "Status Reports", and "Issues and Risks".

Each employee has a "My Timesheets" view, which seems to be a list using a custom content type. The content type allows him to create a new Timesheet entry, and the items in the entry can be divided up into project- and non project-related tasks (for administration, research and development, or other purposes). There is also the obvious notion of billable and non-billable hours.

Project Managers and Timesheet Managers get top-level approval and rollups of all the timesheets and project plans, as you would expect from such a tool.

Managers can also view predictions of future timesheet entries, for instance when an employee submits a request for leave. On the My Timesheets screen there is a handy little icon that shows whether a vacation request has been approved or not (it looks like a KPI).


Next I attended a seminar called "Web 2.0 Programming", by Microsoft's Director of Web 2.0, Michael Platt. I found this to be so insightful and it dovetails so well with a couple of other seminars that I am going to blog about those separately.


After the last seminar of the day (using Dynamics CRM 4 for handling events - basically hooking CRM up into your business platforms) I headed off to the K2 Underground Party, hosted at Onyx just down the road from the Conference Centre.

At the party I finally had a chance to meet some of the people whose names I have heard or blogs I enjoy reading. I met Jey Srikantha and Chris O'Connor, aka "Grumpy Wookie" (who turns out to be a fellow Ontarian by birth!), and I also met Anthony Petro, whom I had previously interviewed, and Chris, who worked on programming K2 against the SharePoint object model (I'm sure he enjoyed it!).

Some of my colleagues from Dimension Data - Jeremy Hancock, Ben Johnston, and Alan Coulter - were also there, and we caught up with some former team members, amongst them K2 Insider Bruce Swiegers (who blogs at http://k2insider.blogspot.com) and Ian Newark, who is a Business Development Manager at K2 now.

It was a good group of people and lots of fun. Best of luck to K2 with their product; it seems like they are seeing a lot of interest from the dev community so far. I even saw one Tech.Ed attendee covered head to toe with K2 Underground stickers - I guess you could say he is committed (or should be)!

Monday, August 06, 2007

I'm lucky enough to be spending the rest of the week at Tech Ed 2007 Gold Coast, courtesy of my managers at DiData.

The seminars this year seem very interesting - there's a lot of coverage of Web 2.0 concepts. Microsoft is focusing heavily on what they call Software + Services (it's kind of like SaaS with some math, I guess) and there is heavy emphasis on this in the lineup.

In particular I'm interested in learning more about Enterprise Library 3.1 and Software Factories from Tom Hollander (formerly of the Microsoft Patterns and Practices Group). Among the scores of other sessions are ones on workflow, the latest version of Dynamics CRM, capacity management planning for SharePoint, and Business Intelligence.

There is also a Blogger's Lunch about Web 2.0 trends with some guest panelists and questions from the floor. Should be a lively discussion.

I'm looking forward to learning plenty of new things and meeting lots of my fellow super-caffeinated developers. I'll try to blog while I'm there.

Saturday, August 04, 2007

Currently Team Foundation Server uses Windows SharePoint Services sites to host project sites. This provides a useful central point for collecting development documentation, workspaces, and meeting spaces, and it allows people without TFS to at least view the related information (although now there is an additional option, the very cool TeamPlain Web Access for Team System, which is free for all licensed TFS users!).



Unfortunately the current version only supports WSS 2.0 sites. There are some workarounds, but they can be flaky.

The good news is that TFS 2008 will support WSS 3 sites - Abdelhamid Abdou talks about this on his blog in this posting. Better yet, when you set up TFS you are now given the choice to have the SharePoint integration on the same box, or to point to an existing SharePoint application on another server.

To learn more about the current workaround, you can read Brian Keller's blog posting here. Brian is a Technical Evangelist for TFS and makes it clear that any modifications to current TFS installations to make them work with WSS 3 are hacks and may not be supported. He does point to Mike Glaser who blogged extensively about how to upgrade to WSS 3 sites.

Incidentally, as of July 30, Readify in Australia has released TFS hosting "in the cloud"! This could be huge for small ISVs or consultants that want enterprise development tools without the upfront expense or maintenance effort! The service starts at $995 AUD a month and more info can be found at http://www.tfsnow.com/.

Friday, August 03, 2007

Microsoft's SharePoint Team Blog published a fascinating case study on the development of Glu Mobile's website a few weeks ago...The website was built on MOSS 2007 but with some significant rejigging to support some distinct needs: namely support for multiple cultures, fast load speed, streaming Flash videos, and mobile devices.

The blog entries are here:

  • Part 1: Overview of the Solution
  • Part 2: Master Pages and User Controls
  • Part 3: Performance - Cache is King

Glu Mobile shows what can be done using SharePoint as a product and SharePoint as a platform. The company creating the Glu Mobile website, AllIn Consulting, used MOSS as the basis for the site's content publishing and lists functionality, and then built their own custom ASP.NET controls and features to meet the stringent functional requirements. To quote them,

Our design philosophy from the start was to integrate the best of ASP.NET 2.0 and MOSS 2007 to implement a highly functional, manageable, and scalable site within a short amount of time.

Their attention to detail was so exact that they actually created a special "whitespace filter" - an HTTP Module I guess - that stripped out the whitespace in the HTML code so that when a page was downloaded it was an average of 5k smaller! Other examples of their exacting care are their server-side code that streams JavaScript that is targeted to a user's browser for maximum browser compatibility, and a custom-built "variation" engine to simplify maintenance of localized site content.
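The case study doesn't show their code, so this is purely my guess at the shape of such a thing: an IHttpModule that swaps in a response filter stream and collapses the whitespace between tags before the HTML goes out the door.

```csharp
using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;
using System.Web;

// A guess at the shape of a "whitespace filter": an IHttpModule that wraps
// Response.Filter so outgoing HTML has runs of whitespace collapsed.
public class WhitespaceModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.ReleaseRequestState += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            if (context.Response.ContentType == "text/html")
                context.Response.Filter = new WhitespaceStream(context.Response.Filter);
        };
    }

    public void Dispose() { }
}

// Response filter that rewrites the outgoing bytes.
public class WhitespaceStream : Stream
{
    private readonly Stream _inner;
    public WhitespaceStream(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        string html = Encoding.UTF8.GetString(buffer, offset, count);
        // Naive: collapses whitespace between tags; real code must cope with
        // <pre> blocks, inline scripts, and chunk boundaries.
        string trimmed = Regex.Replace(html, @">\s+<", "><");
        byte[] output = Encoding.UTF8.GetBytes(trimmed);
        _inner.Write(output, 0, output.Length);
    }

    public override bool CanRead  { get { return false; } }
    public override bool CanSeek  { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override void Flush() { _inner.Flush(); }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}
```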

As their company name suggests, AllIn were prepared to gamble on the core feature set of SharePoint and their ability to recognize the areas where the project required them to strip out existing functionality and roll their own code.

What they created, from the evidence of the blog case study and the website itself, is more than the sum of its parts. Congratulations to the AllIn team for providing such a lovely example of cutting-edge development on the MOSS platform. I hope the SharePoint blog features more examples like this in the future.