The PDC is just about over: the final sessions have finished and the place is emptying rapidly. The third day included a lot of good information about SQL Azure, covering both the progress made to date and the overall direction, including a new announcement by David Robinson, Senior PM on the Azure team, about a project codenamed ‘Houston’.
During the sessions today the 10GB limit on a SQL Azure database was mentioned a number of times, but each mention was caveated with the suggestion that this is purely the limit right now and it will be increased. To get around this limit, you can partition your data across multiple SQL Azure databases, as long as your application logic understands which database to get the data from. There is no intrinsic way of creating a view across the databases, but it immediately made me wonder whether, if you were able to use the linked server feature of the full SQL Server, you could link to multiple Azure databases and create a partitioned view across them. I will have to try that out when I get back to the office, but I do not expect it to work.
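The application-side routing the speakers described can be as simple as a modulo hash over the partitioning key. A minimal sketch in Python; the server names and the choice of a customer id as the partitioning key are my own invention, not anything shown at the PDC:

```python
# Hypothetical sketch of application-side partition routing across
# several SQL Azure databases. The connection strings below are made up.

SHARD_CONNECTION_STRINGS = [
    "Server=tcp:shard0.database.windows.net;Database=app_part0;",
    "Server=tcp:shard1.database.windows.net;Database=app_part1;",
    "Server=tcp:shard2.database.windows.net;Database=app_part2;",
]

def connection_string_for(customer_id: int) -> str:
    """Route a customer key to the shard that holds its rows."""
    shard = customer_id % len(SHARD_CONNECTION_STRINGS)
    return SHARD_CONNECTION_STRINGS[shard]
```

The key point is that the routing is deterministic: the same key always lands on the same database, so every part of the application agrees on where a given row lives.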
SQL Azure handles all of the resilience, backup, DR modes etc., and it all remains hidden from you, although when connected to a SQL Azure database you do see a ‘master’ database present. It is not really a ‘master’ in the same way that we think of one, and it quickly becomes apparent how limited that version of ‘master’ really is: it exists purely to give you a place to create logins and databases. It could have been called something else to make this a bit clearer, but one of the SQL Azure team said the name was kept for compatibility with third-party applications that expect a master database to be present.
SQL Azure supports transactions, as mentioned before, but given the current 10GB limit on a database you will be partitioning your data across databases. That will be a problem, because the system does not support distributed transactions, so any atomic work that is to be committed on multiple databases at once is going to have to be controlled manually / crufted in code, which is not ideal and a limitation to be aware of.
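Without distributed transactions, cross-database atomicity has to be hand-rolled as a compensation pattern: apply the first write, attempt the second, and undo the first if the second fails. A rough sketch, with plain dictionaries standing in for the two databases (real code would run the writes and the compensating statement over two separate connections):

```python
# Sketch of manual "commit both or undo" logic across two databases.
# Dictionaries stand in for the two SQL Azure databases.

def transfer(db_a: dict, db_b: dict, account: str, amount: int) -> bool:
    """Debit db_a and credit db_b; undo the debit if the credit fails."""
    db_a[account] = db_a.get(account, 0) - amount  # first write
    try:
        if account not in db_b:
            raise KeyError(account)  # simulate a failure on the second database
        db_b[account] = db_b[account] + amount  # second write
        return True
    except KeyError:
        db_a[account] = db_a[account] + amount  # compensate: undo the first write
        return False
```

Note this is weaker than a real distributed transaction: a crash between the first write and the compensation still leaves the databases inconsistent, which is exactly why the lack of distributed transaction support is worth flagging.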
Equally, cross-database joins came up as an area with problems: they can be made, but it appears there are performance issues. I am interested in running some more tests there to see whether you can mimic a partitioned view across databases using joins. The recommendation was to duplicate reference data between databases to avoid the joins, so lookup tables would in effect appear in each database, removing the cross-database join.
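Duplicating the reference data is straightforward to automate: push the same lookup table into every partition database so queries can join locally. A toy sketch, with dictionaries again standing in for the shards and an invented country lookup as the reference data:

```python
# Sketch: copy a lookup table into every partition database so queries
# join locally instead of across databases. Dicts stand in for shards;
# real code would issue the INSERTs over each shard's own connection.

COUNTRY_LOOKUP = {1: "United Kingdom", 2: "United States", 3: "Ireland"}

def replicate_lookup(shards: list, lookup: dict, table: str) -> None:
    """Write the same lookup table into each shard."""
    for shard in shards:
        shard[table] = dict(lookup)  # fresh copy per shard

shards = [{}, {}, {}]
replicate_lookup(shards, COUNTRY_LOOKUP, "countries")
```

The trade-off is the usual one for denormalisation: every change to the reference data now has to be re-applied to every shard.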
On the futures list:
- The ability to have dynamic partition splits looked interesting; regular SQL Server does not have this facility within a partitioned table, so if Azure can do it across databases then this might come up on the SQL Server roadmap as a feature. That could be wishful thinking.
- Better tooling for developers and administrators – that is a standard future roadmap entry.
- Ability to Merge database partitions.
- Ability to Split database partitions.
So SQL Azure has grown up considerably and continues to grow. In the hands-on labs today I got to have more of a play with it and started testing more of the subtle limitations and boundaries that are in place. Connecting to an Azure database via SQL Server Management Studio is trivial, and the object explorer contains a cut-down version of the normal object tree, but it includes all the things you would expect such as tables, views and stored procedures.
Some limitations from the lack of master and real admin access become apparent pretty fast: no DMV support, no ability to check your current size, and no ability to change a significant number of options. In fact, the bulk of the options are not even exposed.
I took an immediate look at two of my personal favourites: maxdop and parameterization.
- Maxdop is set at 1, although you cannot see it, and attempting to override it throws an error from the query window telling you that it is not permitted. Do not plan on parallel query execution; you will not get it.
- I attempted to test the query parameterization using the date literal trick, and the query appeared to remain parameterized, as though the database is in ‘forced’ parameterization mode, which makes it more likely to hit parameter sniffing problems. I have not been able to concretely prove it as yet, but the early indication is that the setting is ‘forced’.
One other interesting point was that a table has to have a clustered index; it is not optional if you want to get data into the table. That said, it did not stop me from creating a table without a clustered index, though I had not attempted to populate data into it to see this limit in action; a case of too much to do and so little time.
In one of the final talks about SQL Azure, David Robinson announced a project codenamed ‘Houston’ (there will be so many ‘we have a problem’ jokes on that one), which is basically a Silverlight equivalent of SQL Server Management Studio. The concept follows from SQL Azure being within the cloud: if the only way to interact with it is by installing SSMS locally, then it does not feel like a consistent story.
From the limited preview, it only contains the basics, but it clearly lets you create tables, stored procedures and views, edit them, and even add data to tables in a grid view reminiscent of Microsoft Access. The UI was based around the standard ribbon bar, with an object window on the left and a working pane on the right. It was lo-fi to say the least, but you could see conceptually where it could go; given enough time it could become a very good SSMS replacement, though I doubt it will be taken that far. There were Import and Export buttons on the ribbon with what looked to be Excel-like icons, but nothing was said or shown of them. Date wise it is ‘targeting sometime in 2010’, so this has some way to go and is not even in beta as yet.
So that was PDC09, excellent event, roll on the next one!
The PDC has been an amazing place to be today; the buzz and excitement generated from the keynote this morning has permeated the entire convention centre, and understandably so. This is primarily a conference for IT people, and what is the best way to get IT folk on board? Give them some hardware: USB sticks, USB drives, t-shirts. But Steve Sinofsky and Microsoft went one better this morning in the PDC keynote.
You could sense something was coming as they went through a number of netbook / laptop devices, talking about how they have learnt more about how the hardware is constructed and used. This led them to the creation of a kind of reference laptop device with it all built in, so it became an ultimate development platform. I was half expecting something along the lines of ‘and we are offering this to you today for a discount of…’, since it was clearly a very nice device being shown to the crowd. The spec we now know is:
- Acer laptop
- Dual-core Celeron U2300 chip
- 2GB of RAM
- 250GB hard disk
- Windows 7 Ultimate 64-bit, preinstalled with Office 2010 beta
- Tablet-style PC with touch screen
- 1366×768 resolution, Intel GMA 4500MHD graphics
- Webcam, 3G SIM card support, HDMI output, built-in memory card reader
It manages to score 3.2 on the Windows Experience Index, which is pretty impressive for a semi-netbook-style laptop; the score is understandably pegged by the graphics performance.
What I did not expect, and what the hordes went wild at, was that he said: “today we are giving you one of these laptops, for free”. Cue complete madness.
But to be fair, there is a considerable amount of buzz from the announcements and features being demonstrated. I have to confess that they are not SQL / data related, which is of course my passion, but they are worth mentioning:
- Silverlight 4 entered beta today and can be downloaded; release is expected in the first half of 2010, so expect Mix10 to contain the release announcements on that.
- The Silverlight 4 feature set has been pumped up in all the key areas the technology was lacking: print support, context menus, access to media devices such as webcams and audio, drag / drop, rich text support, clipboard access… the list goes on.
- Office 2010 beta is now available and can be downloaded; PowerPivot (what was codenamed ‘Gemini’) is now available to all.
- Visual Studio 2010 beta is now available, which brings along a whole host of templates for all these new features.
- Sharepoint 2010 beta was released today, and the integration between the development surface and the Sharepoint site looks to be a consistent story; it got cheers from the Sharepoint developers in the audience. (I kind of feel sorry for the Sharepoint presenters and demos: they followed on the heels of Steve and then Scott, who had just made the most significant announcements of the conference and given away the best ‘goodie’ you could get. How can you follow that?)
So did this leave anything for the database side of my passion?
Well yes, in a roundabout way. What is interesting is that Silverlight is extending to include a trusted model which gives you far wider access to the underlying OS, and this starts bringing it into the realms of local data consumption. They have also allowed calls into the older COM object models to be made from within Silverlight when running in full trusted mode. This means that, technically, Silverlight can make direct calls into the database via the COM ADO libraries, instead of using the System.Data namespace and ADO.Net. Up until now there has been no way for the platform to connect to a SQL Server directly, but this provides a very roundabout way to do it.
It seems puzzling why you would allow that scenario but not give Silverlight some form of direct access into the database itself. At the ‘ask the experts’ session later in the day we posed the question of whether a proper data access technology for connecting to SQL Server was being included, and the answer indicated there would be something there to do it, but no specifics were mentioned. I also managed to spend some time in the big room and chat to some of the SQL guys at the booth, as well as the patterns and practices team. I want to get a chance to chat more to the SQL Azure team, but will have to wait until tomorrow to fit that in.
As promised, I wanted to only blog about the bits of the PDC that relate to SQL / Database / Data Services, and not every session within the PDC that I am attending. Many of the sessions have been interesting, but I am viewing them with my Architect’s hat on, and not from the viewpoint of my personal passion for SQL Server. I feel fortunate to be here and listening to the speakers and chatting to them offline instead of watching the PDC on the released videos after the event.
The keynote today contained a number of very interesting looking prospects on the data side of the fence, primarily ‘compered’ by Ray Ozzie, Chief Software Architect at Microsoft. There were also some demos, some of which were quite good, whilst others suffered from over-scripting. I am sure Twitter was going wild at times during the keynote as people gave real-time feedback on what they thought. (Whether that is a good thing or not I am not sure; walking off stage to find a few hundred bad reviews cannot be nice.) But this is not about the demos, it is about the SQL / data stuff.
Microsoft have been doing a lot of work here, and the phrase repeated throughout was ‘3 screens and a cloud’, using the three screens of mobile, computer and TV to represent three different delivery paradigms, but fundamentally using the same technology stack to deliver all three.
The Azure data centres were announced to be going into production on Jan 1st 2010, and billing for those services will commence on the 1st Feb. However, the European and far eastern data centres were not listed as coming online until late in 2010, so the only data centres that will be up and running will be the Chicago and San Antonio data centres.
This may not seem a big problem, and in fact having three pairs of data centres around the world is far better than a single centralised resource, but for Europeans there are data protection laws in place that prohibit the movement of personal data outside the bounds of Europe. In effect, you may not move the data into another jurisdiction where the data laws remove the legal protection the data subject owns. So from a data angle, it will be more interesting when the Dublin / Amsterdam data centres come online in 2010, at which point storing data in the Azure cloud has a better data protection story.
SQL Azure has clearly been ‘beefed’ up and can now be connected to via SQL Server Management Studio just like a normal database, and be administered / interacted with; it even supports transactions. The disaster recovery and physical administration of the SQL Server remain out of sight and are handled by the cloud, not the application / vendor. SQL Azure understands TDS, so connecting to it is pretty seamless and it appears like a regular SQL Server. It has clearly matured as a platform, and rightly so.
Another project, codenamed ‘Dallas’, was announced, which forms part of Pinpoint. Pinpoint is a products / services portal, which instantly made me think of Apple’s ‘App Store’ but for Windows products and companies offering services. The interesting part is the ‘Dallas’ section, which is something like a ‘data store’, allowing the discovery and consumption of centralised data services.
There has always been an issue when consuming data from other sources: you are required to download it, understand the schema of the data, and often ETL it from the format it is supplied in, such as CSV, XML or Atom, into a format that you can work with. Each data source often has its own schema and delivery mechanism, and handling updates to the data remains an operational issue.
With ‘Dallas’ you are buying into the data being held within the cloud, and it will auto-generate the proxy class for the data being consumed, so the schema of the data is available to you within code on the development side. This is an awesome concept, and if they can tie in some form of micro-payment structure, you could easily visualise a set of data services that you consume within an application on an as-needed basis. Without micro-payments you would have to purchase a licence, whether that is a one-off cost or a monthly subscription; neither deals with the ‘elastic’ nature of the applications being placed onto the cloud, where one of the key benefits is that the data centres can scale up / down as your apps require. Given that the billing is based on usage and you specifically want to take advantage of the elasticity of the infrastructure provision, it would make sense to have a similar elasticity in the data service charging.
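The auto-generated proxy class idea can be pictured with a toy sketch: given the published schema of a data service, generate a typed record at runtime so the consuming code works with named fields rather than raw rows. Nothing of the real ‘Dallas’ API was shown, so the service name and fields here are entirely invented:

```python
from collections import namedtuple

# Toy sketch of an auto-generated proxy: the field list would come from
# the data service's published schema. "CrimeStats" and its columns are
# invented for illustration.

def make_proxy(type_name: str, fields: list):
    """Generate a record type from a service schema."""
    return namedtuple(type_name, fields)

CrimeStats = make_proxy("CrimeStats", ["region", "year", "incidents"])

def parse_rows(raw_rows: list) -> list:
    """Turn raw rows from the service into typed records."""
    return [CrimeStats(*row) for row in raw_rows]
```

The real value is that the schema lives with the service rather than in each consumer's hand-written parsing code, so updates to the data no longer mean re-doing the ETL work.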
This is definitely a technology to keep a close eye on, and I will be signing up for an account to get access to the free data services that they are going to expose.
I am at the PDC this week in Los Angeles – the session selection is massive and covers a wide range of topics, although there are only a few sessions involving SQL Server. I think this is a testament to how important the PASS conference is now for SQL Server.
If I fit a SQL session in I will blog about it, or about any major announcements, but I will try to keep this purely about SQL since that is my passion.
PDC is a great technology event as well as a good networking event, so shoot me a comment if you want to try to locate me in the mass of people roaming the halls.