
Wednesday, June 05, 2013

Choosing and Using Cloud Services with SharePoint

Here’s a copy of my presentation for the SharePoint Summit 2013 in Toronto. I spoke about tips and tricks for evaluating and managing cloud services with SharePoint, including some common gotchas and considerations.

Because it was such a wide-ranging topic, I tried to anchor it with the story of StoneShare’s own journey to the cloud. I like to keep my presentations “real world” :)

I hope this is of value to someone – please feel free to contact me on LinkedIn if you have any questions about it.

Thursday, October 13, 2011

Stevey's Google Platforms Rant: Awesome Insight Into the Web Platform Wars

FASCINATING blog post from a Google engineer who used to work for Amazon.com. While the gist is a rant about Google+, the bits I dig are around Amazon’s transformation into a Service Oriented Architecture (SOA) company (as well as hearing some insights about Amazon, a company I greatly admire).

Some insights about massive SOA:

  • “every single one of your peer teams suddenly becomes a potential DOS attacker.”
  • “if you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where.”
  • “monitoring and QA are the same thing.” … “In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA.”
  • “Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers.”
  • “Bezos realized long before the vast majority of Amazonians that Amazon needs to be a platform.”
  • “That one last thing that Google doesn't do well is Platforms. We don't understand platforms. We don't "get" platforms.”
  • “The Golden Rule of Platforms, "Eat Your Own Dogfood", can be rephrased as "Start with a Platform, and Then Use it for Everything."”

My whole love affair with SharePoint revolves around it being a platform. You can read the whole post here: https://plus.google.com/112678702228711889851/posts/eVeouesvaVX 

Hopefully they don’t yank it down!

Wednesday, October 05, 2011

SharePoint Conference 2011: Integrating Social Media with SharePoint Websites

These are my rough notes from an SPC presentation by Brian Rodriguez and Ryan Sockalosky. Awesome session – my favourite type – really matter-of-fact, presenting and solving issues clearly.

Leveraging Social Media Example

Examples of leveraging social media, starting with Facebook.

Ticketmaster on Facebook: Daily Deals, a large following, evangelism, popular items. Popularity leads to large revenue.

Front-and-center placement: put the Like button into the purchasing experience. You go to concerts with your friends, and your friends are on Facebook.

What can you do to leverage Social Networks?

Help users “Like”, Tweet, and share content. Reach a broader audience. Allow your customers to evangelize for you. Drive traffic to your site.

How to Do it: Facebook

Facebook Plugins – developers.facebook.com has plugins you can use.

iframe or XFBML (Facebook’s XML markup language)

Some options: a Like button, a Send button for messages, or an activity feed plugin embedded on the page.
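To make this concrete, here’s roughly what the iframe flavour looks like wrapped in a webpart – a minimal sketch, not the session’s actual code, with the plugin URL and parameters following Facebook’s own published examples:

```csharp
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Sketch of a minimal webpart that renders the iframe variant of the
// Facebook Like button for the current page.
public class FacebookLikeWebPart : WebPart
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Like the current page; encode the URL for use in a query string.
        string pageUrl = HttpUtility.UrlEncode(Page.Request.Url.AbsoluteUri);

        writer.Write(
            "<iframe src=\"http://www.facebook.com/plugins/like.php?href=" + pageUrl +
            "&amp;layout=standard&amp;show_faces=true&amp;width=450\" " +
            "scrolling=\"no\" frameborder=\"0\" " +
            "style=\"border:none; width:450px; height:80px\"></iframe>");
    }
}
```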

Facebook Open Graph Protocol – you can use Facebook Insights to understand traffic patterns and who is liking what.

How to do it: Twitter

Embed a Twitter RSS feed into your page, or embed a plugin to allow tweets from a page.
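Here’s a rough idea of the RSS half in .NET 3.5 using SyndicationFeed – the feed URL follows the old per-user RSS pattern Twitter exposed at the time (an assumption on my part, and long gone today):

```csharp
using System;
using System.ServiceModel.Syndication; // .NET 3.5
using System.Xml;

class TwitterFeedExample
{
    static void Main()
    {
        // Era-appropriate RSS endpoint for a user timeline; the exact
        // URL pattern is an assumption and no longer works today.
        string feedUrl = "http://twitter.com/statuses/user_timeline/contoso.rss";

        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                // Each RSS item is one tweet; render these into your page.
                Console.WriteLine("{0}: {1}", item.PublishDate, item.Title.Text);
            }
        }
    }
}
```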

How to Do It: Other Social Networks

AddThis – social network sharing

Demos

Contoso Electronics website

Added a “Follow” button at the bottom of the master page, plus Tweet and Like buttons next to the content. Defaulting the Tweet content makes it dead easy for a user to click Tweet and share the link.

By virtue of tweeting that page, it is now part of the “Social Stream” – given a bit more weight in search engines and enabled for discovery by the user’s followers and friends.

Created a custom webpart that lets page owners tailor the Twitter content dynamically. The page owner can decide to automatically embed hashtags in any tweet via a webpart property.
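Something along these lines – a sketch of such a webpart with hypothetical names, exposing a Hashtags property the page owner can set:

```csharp
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Sketch of the kind of webpart described above: the page owner sets a
// Hashtags property, and the part renders a Tweet button whose default
// text and hashtags are pre-filled. Names are illustrative, not from the demo.
public class TweetButtonWebPart : WebPart
{
    [WebBrowsable(true),
     WebDisplayName("Hashtags"),
     WebDescription("Comma-separated hashtags to embed in every tweet"),
     Personalizable(PersonalizationScope.Shared)]
    public string Hashtags { get; set; }

    protected override void Render(HtmlTextWriter writer)
    {
        string pageUrl = HttpUtility.UrlEncode(Page.Request.Url.AbsoluteUri);
        string text = HttpUtility.UrlEncode(Page.Title);
        string tags = HttpUtility.UrlEncode(Hashtags ?? string.Empty);

        // twitter.com/share pre-fills the tweet from the query string,
        // so sharing the page is a single click for the user.
        writer.Write(
            "<a href=\"http://twitter.com/share?url=" + pageUrl +
            "&amp;text=" + text + "&amp;hashtags=" + tags + "\">Tweet</a>");
    }
}
```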

Facebook webpart – same options – defaulting text, showing the Like button, the activity feed, people’s faces, etc.

Tip: make sure the Tweet, Facebook, or ShareThis links go to the home page consistently; set some buttons in the master page footer or header and ensure every page on the site shows the sharing buttons.

One of the limitations of sandboxed solutions is that you cannot call RegisterClientScript, so webparts containing these Facebook or Twitter plugins cannot be deployed as sandboxed solutions. A separate Design Mode rendering path is also needed to allow rendering in SharePoint Designer; otherwise you will see the controls as broken in Design Mode.
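A minimal sketch of that Design Mode workaround: write the plugin markup directly in Render() (no RegisterClientScript), and swap in a placeholder when the control is on a design surface:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Sketch of the Design Mode handling described above. The part writes the
// plugin markup directly in Render() rather than calling
// RegisterClientScript, and substitutes a placeholder when rendered on a
// design surface (e.g. SharePoint Designer).
public class SocialPluginWebPart : WebPart
{
    protected override void Render(HtmlTextWriter writer)
    {
        if (this.DesignMode)
        {
            // SharePoint Designer cannot execute the plugin script, so show
            // a friendly placeholder instead of a broken control.
            writer.Write("<div>[Social plugin renders at run time]</div>");
            return;
        }

        // At run time, emit the plugin's script tag and markup inline.
        writer.Write(
            "<script src=\"http://platform.twitter.com/widgets.js\" " +
            "type=\"text/javascript\"></script>");
        writer.Write("<a href=\"http://twitter.com/share\">Tweet</a>");
    }
}
```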

Demo of modifying the master page by overriding the AdditionalPageHead content placeholder to inject Open Graph metadata tags (title, site name, description).
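In code, the same effect can be had by adding HtmlMeta controls to the page head from a control sitting in that placeholder – a sketch with illustrative values (Open Graph uses a "property" attribute rather than "name", so it’s set explicitly):

```csharp
using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

// Sketch of injecting Open Graph metadata from a control dropped into the
// AdditionalPageHead placeholder. Values below are illustrative.
public class OpenGraphMetadata : Control
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        AddOpenGraphTag("og:title", Page.Title);
        AddOpenGraphTag("og:site_name", "Contoso Electronics");
        AddOpenGraphTag("og:description", "Latest products and press releases.");
    }

    private void AddOpenGraphTag(string property, string content)
    {
        HtmlMeta meta = new HtmlMeta();
        meta.Attributes["property"] = property; // Open Graph convention
        meta.Content = content;
        Page.Header.Controls.Add(meta);
    }
}
```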

Enable Engagement and Conversations

Allow Social Commenting

Educate others that the conversation is happening

Increasing brand awareness and credibility by inviting the conversation to your site

Customers can engage with you and with each other

Facebook: Login plugin on your page and activity stream

Twitter: Widgets around searching or surfacing profile on your website

Demo of wrapping the plugin code in a Content Editor webpart for these.

Enabling Engagement and Commenting

Governance is key – to keep engagement clean and managed

Store and publish content on Social Sites: Examples: YouTube, Twitter, Facebook “fan pages”

Increased visibility and re-use

Customers can view/subscribe/join without ever seeing your site

Surface your external social content on your site

1 out of every 6 minutes spent online is spent on a social networking site

Send to Twitter: Workflow in Action

Content added to a SharePoint list triggers an approval SP workflow, which results in a REST post to Twitter. You need to store OAuth credentials in the SharePoint 2010 Secure Store Service to allow staff members to sign in and post via the company account.

Content of tweet needs to be approved before posting

Register an application with Twitter to use their API.

REST-based API, using OWA to allow approval

Set up OAuth by sending the consumer key and secret to Twitter, using the access token they give us once signed up for the REST-based API (while logged in as the company’s Twitter account).

Now go into the Secure Store Service and add new Group access (a single shared set of credentials), although you could have different Twitter profiles stored as individual secure tokens. Define the attributes we are capturing (Screen Name, Token, TokenSecret, ConsumerKey, ConsumerSecret).

Demo of creating an automatic tweet via workflow that includes the page title (for a press release) and a link back to the page.
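For flavour, here’s a sketch of what that final REST call involves: an OAuth 1.0a (HMAC-SHA1) signed POST to the era’s statuses/update endpoint. The four credential values are the ones the demo kept in the Secure Store target application; they appear here as placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Security.Cryptography;
using System.Text;

// Sketch of the workflow's final step: an OAuth 1.0a (HMAC-SHA1) signed
// POST to Twitter's statuses/update endpoint (API v1 of the era).
class TweetPoster
{
    // In the demo these live in the Secure Store Service target application.
    const string ConsumerKey = "<ConsumerKey>";
    const string ConsumerSecret = "<ConsumerSecret>";
    const string Token = "<AccessToken>";
    const string TokenSecret = "<AccessTokenSecret>";

    static void Main()
    {
        PostTweet("New press release: Contoso widgets now shipping");
    }

    static void PostTweet(string status)
    {
        string url = "http://api.twitter.com/1/statuses/update.xml";

        // OAuth protocol parameters plus the request parameter, sorted by
        // key as the signature base string requires.
        var oauth = new SortedDictionary<string, string>
        {
            { "oauth_consumer_key", ConsumerKey },
            { "oauth_nonce", Guid.NewGuid().ToString("N") },
            { "oauth_signature_method", "HMAC-SHA1" },
            { "oauth_timestamp", ((long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds).ToString() },
            { "oauth_token", Token },
            { "oauth_version", "1.0" },
            { "status", status } // request parameters are signed too
        };

        // Signature base string: METHOD & encoded-URL & encoded-sorted-params.
        string paramString = string.Join("&",
            oauth.Select(p => Enc(p.Key) + "=" + Enc(p.Value)).ToArray());
        string baseString = "POST&" + Enc(url) + "&" + Enc(paramString);
        string signingKey = Enc(ConsumerSecret) + "&" + Enc(TokenSecret);

        string signature;
        using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(signingKey)))
            signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));

        // Authorization header carries the oauth_* parameters plus signature.
        string authHeader = "OAuth " + string.Join(", ",
            oauth.Where(p => p.Key.StartsWith("oauth_"))
                 .Select(p => Enc(p.Key) + "=\"" + Enc(p.Value) + "\"")
                 .Concat(new[] { "oauth_signature=\"" + Enc(signature) + "\"" })
                 .ToArray());

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.Headers["Authorization"] = authHeader;
        request.ContentType = "application/x-www-form-urlencoded";
        byte[] body = Encoding.UTF8.GetBytes("status=" + Enc(status));
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);
        request.GetResponse().Close();
    }

    // RFC 3986 percent-encoding; Uri.EscapeDataString leaves !*'()
    // unescaped, so they are fixed up here as OAuth requires.
    static string Enc(string value)
    {
        string s = Uri.EscapeDataString(value);
        return s.Replace("!", "%21").Replace("*", "%2A")
                .Replace("'", "%27").Replace("(", "%28").Replace(")", "%29");
    }
}
```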

Other Integration Scenarios

Leverage FAST Search for dynamic content with the Facebook Graph API

Connect BCS to Twitter – use native UI and WebParts with Twitter content

Federated Search of Twitter, YouTube, etc.

Use federated login to Live or Facebook – use native SP Claims Based Auth instead of the FB Connect plugin. Allows for audience targeting or storing a rich SP user profile

Allows for back-end LOB integration (e.g. CRM, SAP, etc.)

Tips and Considerations

Do you need to own the conversation / content?

SharePoint’s Social Networking capabilities require login

Linking is tied to pages / URLs – if pages change or are deleted, the conversation goes away

Performance of plugins – the Facebook Like button makes at least 2 calls to Facebook. You could load the JS asynchronously after the page loads to keep load times down.

Reference the JavaScript in master pages or page layouts
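For example, a farm-solution webpart could register a small loader that pulls the Facebook JS SDK in asynchronously – a sketch, not the session’s code:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Sketch of deferring a plugin's JavaScript: instead of a blocking <script>
// tag, a loader appends the script element after the page is parsed, so the
// Like button's calls to Facebook don't hold up first render. (Farm solution
// only - sandboxed parts can't register script.)
public class AsyncFacebookSdkLoader : WebPart
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        const string loader =
            "(function() {" +
            "  var js = document.createElement('script');" +
            "  js.src = 'http://connect.facebook.net/en_US/all.js';" +
            "  js.async = true;" +
            "  document.getElementsByTagName('head')[0].appendChild(js);" +
            "})();";

        Page.ClientScript.RegisterStartupScript(
            this.GetType(), "fbAsyncLoader", loader, true /* add script tags */);
    }
}
```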

Friday, November 06, 2009

SharePoint 2010 Likely To Offer App Store

This just in from ReadWriteWeb:

Microsoft will offer an application marketplace within Sharepoint 2010 that will integrate with third-party applications from its partner network. No date has been set for the marketplace launch but it will evolve from "The Gallery", a feature that provides Sharepoint 2010 users access to templates…

Details are few about the application marketplace that will be offered through Sharepoint. But it does point to the increasing significance of third-party applications for the Sharepoint platform and how the service may evolve as cloud computing becomes more prevalent.

I was predicting this a few weeks ago in my “Things To Get Excited About in SharePoint 2010” post. Here’s what I had to say:

Service Application Architecture – the Shared Service Provider was a good idea but it was a bit hard to use in practice. Under the new architecture, you can create Service Applications for things like Excel Services, Forms Services, Business Connectivity Services, and other services that you build or buy, and you can mix and match these in your farms as you like. The services get consumed by web front ends via a standard interface.

This should allow a lot of plug-and-play customization of farms. I’m even wondering if there is an opportunity for vendors here…create some services and expose them to clients from the cloud.

There are some other big changes, like Claims Based Authentication and Solution sandboxing, which are intriguing to me. The Solution sandboxing feature gives me this sneaking suspicion we will one day soon see a Microsoft SharePoint App Store where we can buy, download and run SharePoint solutions in our farms.

Magic Eight-ball now says: “You may rely on it”.

Wednesday, November 04, 2009

Hosting Clockwork Web Framework With Amazon

I’ve blogged a lot about my admiration for Amazon’s web services stack. I think they understand the web as well as any company in the world. It’s always been my intention to investigate Amazon’s Elastic Compute Cloud (EC2) and since I needed hosting for my new Clockwork Web Framework, I decided to give it a try.

The reason I went with Amazon rather than a traditional hoster is that I have no idea what kind of interest there will be in the framework, and therefore cannot predict what the load on a web server will be. Amazon EC2 is designed for this kind of flexibility, and you pay per hour.

The Platform

I am running a small 32-bit Windows Server 2003 instance to begin with. It only has 1.7 GB of RAM. I can scale this up if I need to, or more likely I will run up another small instance and load balance the two using Amazon’s Elastic Load Balancer technology.

On this, I am using IIS 6, .NET 3.5, SQL Server 2005 Express, and PowerShell. Most of my files are kept on a permanent storage drive (more on this below) and served by IIS. In order to maximize speed and lower the CPU burden on the server, I have decided to use another Amazon technology, CloudFront.

CloudFront Content Delivery Network

CloudFront is a Content Delivery Network (CDN), like Akamai or Limelight. I use it to serve my images and resource files. Basically Amazon has edge servers all over the world with a copy of my images and resource files, and when users request them from my website, CloudFront automatically sends them a copy from the nearest location to them, making for some very fast download times.

To make this work, you have to use Amazon Simple Storage Service, or S3. This is a virtual file system. Basically you have “buckets” of files that are served up when requests come in from the CloudFront “distributions”.

I’ve optimized it a bit by having two distributions; one for images and one for resources. This means that a page which requires both things will load even faster since two parallel CDN distributions are processing the files at the same time.
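For the curious, pushing a file into one of those buckets from code looks roughly like this with the AWS SDK for .NET of the day (v1-style synchronous API; bucket and key names are mine):

```csharp
using Amazon.S3;
using Amazon.S3.Model;

// Sketch using the era's AWS SDK for .NET: push a file into the S3 bucket
// that backs a CloudFront distribution and make it publicly readable so
// the edge servers can serve it. Names and paths are illustrative.
class S3UploadExample
{
    static void Main()
    {
        AmazonS3Client s3 = new AmazonS3Client("ACCESS_KEY", "SECRET_KEY");

        PutObjectRequest request = new PutObjectRequest
        {
            BucketName = "images.clockworkwf.com",  // bucket behind the CDN
            Key = "logo.png",
            FilePath = @"C:\site\images\logo.png",
            CannedACL = S3CannedACL.PublicRead      // CloudFront must be able to read it
        };

        s3.PutObject(request);
    }
}
```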

You can create CloudFront distributions through code, or through Amazon’s web management portal.

Create CloudFront Distribution

Create CloudFront Distribution - Completed

Since you can control the public URL of the distributions, you will notice if you view the properties of my website that my images are handled by the path “http://images.clockworkwf.com” and my resource files are handled by the path “http://resources.clockworkwf.com”. In other words, I have full control over what paths I give them. Most people will never know these pictures are being served from Amazon.

I notice the website loads really quickly, so CloudFront makes a big difference.

EC2 Hosting Challenges

So that’s the high-level architecture. There are a number of impacts of using Amazon as a host that I’d like to talk about.

Server Goes Up, Server Goes Down

To begin with, you have to assume that at any moment your server will go down. If your server dies, it vanishes, and you have to “spin up” another one, using the web interface or code. It’s very easy to do from the web console: just click “Launch Instance” and you can pick any server ranging from Ubuntu Linux to 64-bit Windows Server 2003 R2 Enterprise.

Launching a new EC2 instance with CloudWatch
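Spinning up an instance from code is only a few lines with the era’s SDK – a sketch, with a placeholder AMI ID:

```csharp
using Amazon.EC2;
using Amazon.EC2.Model;

// Sketch of "spinning up" a replacement server in code with the era's
// AWS SDK for .NET. The AMI ID is a placeholder for a Windows Server image.
class LaunchInstanceExample
{
    static void Main()
    {
        AmazonEC2Client ec2 = new AmazonEC2Client("ACCESS_KEY", "SECRET_KEY");

        RunInstancesRequest request = new RunInstancesRequest();
        request.ImageId = "ami-xxxxxxxx";   // placeholder Windows Server 2003 AMI
        request.InstanceType = "m1.small";  // the 1.7 GB "small" instance size
        request.MinCount = 1;
        request.MaxCount = 1;

        RunInstancesResponse response = ec2.RunInstances(request);
        string instanceId =
            response.RunInstancesResult.Reservation.RunningInstance[0].InstanceId;
        System.Console.WriteLine("Launched " + instanceId);
    }
}
```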

Although the server instances you can use have their own hard drive space on C: and D: drives, you have to treat that as transitory storage.

I’ve set up my system in such a way that I can use an Elastic Block Store (EBS) hard drive volume, provided by Amazon. This is more permanent drive space that you pay for, but it can be attached to any server instance. Think of it as a SAN (that’s probably what it is).

So I’ve got my database and web files on this EBS block, which I then mount to any server instance I’m currently running.

On the server instance, I simply point IIS web server to the EBS block files, and away we go.

The EBS can be any size you like, and you pay per GB per month. Right now I’m using 10GB since my log files and database don’t take up much room. I can add more space later if I need to.

Here’s a screenshot of that EBS volume, in the Amazon web console.

Allocate Elastic Block Storage Instance

Dynamic DNS Entries

Next problem: since the server can go down at any moment, DNS becomes an issue. If my server dies and I spin another one up, it will be given its own IP address, which my DNS entry for www.clockworkwf.com wouldn’t know about. So there might be a long delay while DNS changes over to the new IP address.

So, I’m using a Dynamic DNS service called Nettica. They have a management console where I can enter my various domain records and assign a short Time To Live (TTL), which means cached DNS entries expire quickly. So if my server dies, I can change the entry in Nettica to point to the new server’s IP address, and within a few seconds requests are going to the right place.

Nettica even allows me to control all of this through C# code. Going forward I plan to write PowerShell server management scripts that can automatically spin up a new server on Amazon, determine the IP, and register that with Nettica.

Incidentally, Amazon EC2 allows you to buy what it calls “Elastic IP Addresses”. Essentially you can “rent” a fixed IP address which can be dynamically allocated to a server instance. So in the short run this makes life easier for me: I have rented one, used it for my Nettica domain name record, and can assign this fixed IP to any new server instance.

Allocate IP Instance
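Re-pointing that rented IP at a new instance is a one-call affair – again a sketch with placeholder values:

```csharp
using Amazon.EC2;
using Amazon.EC2.Model;

// Sketch of re-pointing the rented Elastic IP at a freshly launched
// instance, so DNS never has to change. Values are placeholders.
class ElasticIpExample
{
    static void Main()
    {
        AmazonEC2Client ec2 = new AmazonEC2Client("ACCESS_KEY", "SECRET_KEY");

        AssociateAddressRequest request = new AssociateAddressRequest();
        request.PublicIp = "203.0.113.10";   // the rented Elastic IP
        request.InstanceId = "i-0123abcd";   // the new server instance

        ec2.AssociateAddress(request);
    }
}
```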

Next problem: Disaster Recovery.

Disaster Recovery is even more important in the Amazon EC2 world than elsewhere, since again your instances could die at any moment. Not that they will, but the point is, they are “virtual” and Amazon isn’t making any promises (unless you buy a Service Level Agreement from them).

However, Amazon’s EC2 provides a level of DR by its very nature – you can spin up another machine in a small amount of time. Estimates for new Windows instances are about 20 minutes.

There’s also something called an Availability Zone. Essentially it means “data centre” – Amazon has several of these, grouped into regions, so you can spread your servers around between US East, US West, Europe, and so on. So when that dinosaur-killing comet hits North America, the Europe zone keeps chugging.

Right now I’m not really doing much with my database, so DR isn’t such an issue. I have some security since my files are on an EBS block. However, eventually I’ll setup a second server in another availability zone and load balance the two.

Another Challenge: Price

Amazon Web Services are flexible, and you are charged per hour, for only what you use. This is an amazing model but it doesn’t work so well for website hosting, because of course your servers are supposed to be online 24/7, 365 days a year.

It’s hard to tell for sure what the annual bill will be, but for my small server instance (remember, only 1.7 GB of RAM) it will cost well over $1,000. That’s a lot more than shared space on a regular hosting provider. However I’m willing to pay this, for the flexibility I get, and also because I think Amazon web services are a strategic advantage, so the earlier I learn about them, the more business opportunities I might unlock.

One good thing is that Amazon has been aggressively dropping its prices as it improves its services. Additionally, they have started offering “Reserved Instances” – basically a pre-pay option for 1-year and 3-year terms. Unfortunately these are only for Linux servers at the moment, but hopefully they will add them for Windows and then I can save money year on year.

CloudWatch Monitoring

Amazon offers a web-based monitoring option for its server instances, called CloudWatch. I’ve started using it (for an additional fee) but I’m not sold on its utility yet. I don’t think I’m using it to its full potential – it is supposed to help you manage server issues by monitoring thresholds.

ec2 Cloud Monitoring

Managing S3 Files Using Cloudberry Explorer

I needed an easy way to create and manage my buckets, CloudFront distributions, and S3 files. I found Cloudberry Explorer, and downloaded the free version of it. I was able to drag and drop 1600 files from my Software Development Kit to the S3 bucket where I’m serving the resources. Super!

There’s a pro version I might purchase which would allow me to set gzip compression and other properties on the files. This would help lower my bandwidth costs and speed up transfers a bit.

Here’s a screenshot of Cloudberry in action:

Cloudberry Amazon S3 Explorer

I love how easy it is to setup and use Amazon’s web services stack. I think they have a great business model for the Cloud, and they’re the company to beat. I’m willing to rely on them for the launch of Clockwork Web framework and so far I haven’t been disappointed.

Sunday, November 01, 2009

Introducing Clockwork Web Framework for .NET

In 2003, I read a book, “Making Space Happen”, by Paula Berinstein. It’s about the efforts of entrepreneurs to open up space to the public. It’s the kind of thing that gets my propeller-head spinning, and after reading it I resolved to create the best website on space travel on the internet.

So, I sat down in a park and within two hours I had covered several sheets of paper with scribbles and scrawls of what my website needed. I had notes on authentication, web components, search boxes, themes, dynamic images, language toggles, and all kinds of stuff.

Being a good little programmer, the more I designed, the more intricate the design became, and pretty soon I was knee-deep in code. Flash forward six years, and I have yet to write a single page of that space website!

But I do have a web framework :)

What It Is

Clockwork makes it easy to build powerful .NET web sites. It’s completely free, open source (under the Apache 2 license) and you can use it in proprietary or open source projects, as you like.

Some of the ways it makes web development easy:

  • Database-agnostic data access
  • Dynamically displays content in different languages
  • Leverages the .NET 3.5 framework, including the Provider Model, generics, LINQ, automatic properties, and more
  • Integrates with popular web services such as those provided by UserVoice, LinkedIn, Google and Yahoo!
  • Makes it really easy to use object-oriented design patterns like Dependency Injection / Inversion of Control, Repositories, and Specifications

Under the hood I use many popular components, including NHibernate for database access, Castle Windsor for Dependency Injection, and log4net for logging.
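To give a flavour of the Dependency Injection style, here’s a generic Castle Windsor illustration – not Clockwork’s actual API; the interface and class are hypothetical:

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Generic illustration of container-based Dependency Injection with
// Castle Windsor, the container Clockwork uses under the hood.
public interface IGreetingService { string Greet(string name); }

public class FriendlyGreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

class WindsorExample
{
    static void Main()
    {
        IWindsorContainer container = new WindsorContainer();

        // Register the component: callers depend on the interface and the
        // container supplies the implementation.
        container.Register(
            Component.For<IGreetingService>()
                     .ImplementedBy<FriendlyGreetingService>());

        IGreetingService svc = container.Resolve<IGreetingService>();
        System.Console.WriteLine(svc.Greet("world"));
    }
}
```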

Although today marks the official public release, the framework is currently at version 3.x because I’ve been using earlier versions of it in production websites since 2004.

I’ve built Clockwork using as many web standards as I can find, as many of the latest .NET elements as possible, software best practices, and a lot of love and stubbornness.

What It Will Become

Well, it’s obviously too early to say. But I am committed to continuing to develop it, I have a long list of things I plan to add, and I’m hopeful a community of .NET developers will adopt it and push it into areas I can’t even imagine today.

Please take a minute to visit the website and learn more about it. I hope you find it helpful.

Many thanks,

Nick

Tuesday, February 17, 2009

Above the Clouds: A Berkeley View of Cloud Computing

The Electrical Engineering and Computer Sciences department at the University of California at Berkeley just put out a research paper on Cloud Computing as they see it.

The paper is an in-depth exploration of what some consider to be just another buzzword, Cloud Computing. Since nobody has agreed on what exactly it means, the implication is that it's just a marketing term.

I remember when web services started to appear, around 2000/2001 if I recall correctly. The descriptions and possibilities seemed great, but nobody really knew what to do with them or why. So there came a time when nobody talked about web services anymore and it looked like that particular bubble had burst.

In fact, behind the scenes, a host of companies and individuals were figuring web services out, building their own, and releasing them. A couple of years after the term started popping up, web services arrived for real and now we have mashups and SaaS and Software + Services and some really well-traveled XML fragments zipping around the globe.

The same thing seems to be going on with Cloud Computing. We're in the early days, and still hearing the "Moon on a stick" promises that Cloud Computing is a silver bullet for everything.

This white paper is one of the first I've seen that really quantifies the (potential) cost savings of Cloud Computing.

Some gems:

  • Explanations on Cloud Computing and how it differs from previous attempts;
  • Classes of Utility Computing on page 10, comparing Google AppEngine, Amazon Web Services, and Microsoft's beta Azure platform;
  • Cloud Computing economic models, on page 12;
  • A discussion of the Top 10 challenges – and potential solutions to them – on page 16;
  • The observation that FedExing your data is a good way to cut down on your bandwidth costs and delays.

This is very impressive work. The full paper is here:

http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf