New Blog Site (October 10, 2011)

Thanks for stopping by and checking out the new blog site.  Over the next few weeks I will be moving content here, so any previous information you have seen will be available again soon.  Keep checking back!


SQL Server Breaking Changes and Deprecated Features (2012, 2008 R2, 2008, 2005, 2000)

In a recent discussion, the issue of breaking changes in SQL Server 2012 came up, and I realized that no one in the room had a thorough knowledge of the breaking changes and discontinued features.  Since we were trying to determine whether a problem was a bug or a discontinued feature, this was important.

That made me realize that I had not seen all of this information gathered into a single location, so I decided to get all of that information and post it here.  The information on SQL Server 2012 will change through Release Candidates and RTM, so I will update this article with those links as that information becomes available.

There are 4 categories of information listed below:

  • Deprecated features – these are features that are scheduled to be removed in a future release of SQL Server, but that are still in the product.
  • Discontinued functionality – this is functionality that worked in a previous version, but will no longer work with the version listed.
  • Breaking changes – these are changes to the behavior of SQL Server that may break your code when you upgrade from a previous version.  For example, if a function now requires additional parameters, that would be a breaking change.
  • Behavior changes – behavior changes are changes that affect how features work or interact with SQL Server.  Optional parameters or changes in type conversions could be behavior changes.

SQL Server 2012 (RC0):

SQL Server 2008 R2:

SQL Server 2008:

SQL Server 2005:

SQL Server 2000:


ServiceU is live with SQL Server “Denali”!

Today we went live with SQL Server “Denali” for our Tier-1 systems.  We did it with only 3 minutes 45 seconds of downtime, and zero problems.  We will make up that downtime between now and the end of the year with the improvements in the online indexing feature alone!


sp_server_diagnostics

SQL Server “Denali” has introduced a new way to capture SQL Server health and diagnostic information with the sp_server_diagnostics stored procedure:

sp_server_diagnostics [@repeat_interval =] 'repeat_interval_in_seconds'

When you supply a repeat interval, the stored procedure runs in “repeat mode” and periodically returns results.  Specifying 0 as the @repeat_interval (the default) causes it to run only once.
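
For example (a minimal sketch based on the pre-release syntax above):

-- Run once and return a single set of results:
EXEC sp_server_diagnostics @repeat_interval = 0;

-- Run in repeat mode, returning a new set of results every 30 seconds:
EXEC sp_server_diagnostics @repeat_interval = 30;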

This is a great way to get a lot of diagnostic information very quickly.  The procedure returns five columns and multiple rows.  The columns are:

  • creation_time – the time stamp of the row creation
  • component – what data is being returned (system, resource, query_processing, io_subsystem, events), discussed below
  • state – the health status of the component
  • state_desc – a description of the health status.  The “events” component will always return a value of 0 (Unknown).
  • data – the data specific to the component; this is the real “meat” of the procedure

The five components contain a ton of great data.  Here is a brief description of the data that is collected about each component:

  • system – spinlocks, severe processing conditions, non-yielding tasks, page faults, and CPU usage
  • resource – physical and virtual memory, buffer pools, pages, cache and other memory objects
  • query_processing – worker threads, tasks, wait types, CPU intensive sessions, and blocking tasks
  • io_subsystem – data on IO
  • events – data on the errors and events of interest recorded by the server, including ring buffer exceptions, ring buffer events about memory broker, out of memory, scheduler monitor, buffer pool, spinlocks, security and connectivity.

You can execute this query interactively and view the results on the screen, or you can save those results to a table or a file (see Books Online for details).  Microsoft recommends creating an Extended Events session to save the output to a file so that you can still access it outside of SQL Server if there is a failure.  For systems that are not suffering problems quite that dramatic, you can capture the data to a table and query it easily.
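
If you just want a quick table to query, here is a minimal sketch – note that the table definition is my own guess based on the five columns described above, so adjust it to match the actual result set in your build:

-- Hypothetical holding table matching the five columns described above:
CREATE TABLE dbo.ServerDiagnostics
(
    creation_time  DATETIME,
    component      VARCHAR(50),
    [state]        INT,
    state_desc     VARCHAR(50),
    [data]         VARCHAR(MAX)
);

-- Capture a one-time snapshot (INSERT ... EXEC requires the column counts to match):
INSERT INTO dbo.ServerDiagnostics
EXEC sp_server_diagnostics @repeat_interval = 0;

-- Then slice out whatever you need, for example the query_processing data:
SELECT creation_time, state_desc, [data]
FROM dbo.ServerDiagnostics
WHERE component = 'query_processing';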

This is a great way to get a quick snapshot of the system to narrow down the problem quickly.  If you work with remote clients that call you when there is a problem, this is also a very quick and simple way to have them gather information for you.  Thank you, Microsoft, for making this data so easy to capture!

Detailed information about sp_server_diagnostics is available in Books Online.

Note:  this information is based on pre-release documentation.  I will update this post and the links as new “Denali” versions are released.


SQL Server “Denali” CTP3 Features (Programmability and Other Features – Part 3)

In the first two posts on CTP3 features, I discussed the new AlwaysOn and Security features in SQL Server “Denali” CTP3, and then the Business Intelligence and SSIS/Data Tools features.  This post is about programmability enhancements and miscellaneous features that do not fall into other categories.  As before, these are not all of the features included in Denali, only the ones that I think will be of most interest to a wide audience, or features that I am excited about for one reason or another.

Programmability

There are many programmability enhancements in Denali, but here are a few that I think are worth noting:

  • SQL Server Developer Tools (code-named “Juneau”) – Until now, SQL Server development revolved around a mixture of Visual Studio, SSMS and third-party tools, and even then, often fell short of what database developers need on a daily basis.  Microsoft has worked to address these needs with the implementation of Juneau.  Juneau is intended to be the single platform for developing and deploying SQL Server code – including source control, DML and DDL, BI, SSIS, and both on-premise and SQL Azure.  It includes declarative, model-based development, the ability to perform version-based targeting, improved development experiences for SSIS developers and more.  The last version I used was a private build not ready for public use, but even so it had some pretty incredible features.  I think that starting with Denali, database developers will begin to shift more and more of their development to Juneau.  One of my few complaints is that it does not have full support for cross-database functionality, but Microsoft is aware of that limitation and I expect to see at least some of those limitations addressed in future versions of the tool.
  • PowerShell 2.0 Support – extended support through PowerShell 2.0 and SQLPS.exe allows much more power and flexibility in automating SQL Server tasks.
  • Contained Databases (CDBs) – While this is not specifically a programmability improvement, it does greatly affect the software life-cycle, and therefore all database developers.  The real limitation of CDBs is that they are currently designed around an application living in a single database, but the idea is that all functionality around the application (from the database perspective) will be contained within a single database (CDB).  That is, all users, jobs, data, et cetera will live inside a single contained database.  That CDB can then be moved from server to server without having to script and redeploy items like logins.  CDBs are definitely a step in the right direction, and if you have an application that uses a single database, you should definitely look at them.
  • ANSI OFFSET / FETCH – This is for result set paging and can really simplify your life if you have previously had to deal with complex code just to implement paging from your application (see the paging sketch after this list).
  • Sequence Generators – with sequence generators, you can create identity-like values instead of, or in addition to, identity columns.  The CREATE SEQUENCE statement can be used to create global sequence generators in integer types.  The integer types can be built-in or user-defined, and you can specify options such as START WITH, INCREMENT BY, MINVALUE, MAXVALUE, CYCLE/NO CYCLE, CACHE/NO CACHE.  This feature is very powerful and is not something that can be covered in just a few sentences, but sequence generators may dramatically change how you write code, so definitely look them up in BOL and play around with them (see the sequence sketch after this list)!
  • Improved Error Handling – Error handling finally works like it does in a real programming language in SQL Server “Denali”.  Here are a few of the summary items that make error handling dramatically better:
    • Abort statement (different from severity level now!)
      • Statement Abort
      • Scope Abort
      • Batch Abort
      • Transaction Abort
      • Connection Abort
    • THROW / CATCH / re-THROW – this all actually works now, and works like you would expect.  The functionality is very complete and does not automatically use sys.messages.  The THROW statement allows <number>, <message> and <state> (see the THROW sketch after this list).
  • New Scalar Functions – these include new conversion functions like TRY_CONVERT(), conversions to and from strings [FORMAT(), PARSE(), TRY_PARSE()], and other miscellaneous functions [IIF(), CHOOSE(), CONCAT()].  In addition, there are new date and time functions:  EOMONTH(), DATEFROMPARTS(), TIMEFROMPARTS(), DATETIMEFROMPARTS(), DATETIME2FROMPARTS(), and SMALLDATETIMEFROMPARTS().  A few of these are sketched after this list.
  • Spatial Data Improvements – a number of improvements have been made to spatial data functionality in Denali, including:  circular arcs on the ellipsoid, support for full globe spatial objects, functional parity of geometry and geography data types, and spatial index performance improvements.
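
Here is the paging sketch mentioned above (the table, columns and page values are all hypothetical):

DECLARE @PageNumber INT = 3,
        @PageSize   INT = 25;

SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders
ORDER BY OrderDate, OrderID       -- an ORDER BY is required for OFFSET / FETCH
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;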
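
The sequence sketch (names and options are illustrative):

CREATE SEQUENCE dbo.OrderNumbers AS INT
    START WITH 1000
    INCREMENT BY 1
    MINVALUE 1000
    NO CYCLE
    CACHE 50;

-- Pull the next value anywhere you need it, even across multiple tables:
SELECT NEXT VALUE FOR dbo.OrderNumbers AS NextOrderNumber;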
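
The THROW sketch – the error number and message are made up (user-defined error numbers must be 50000 or greater):

BEGIN TRY
    -- Raise a user-defined error:  <number>, <message>, <state>.
    -- No sys.messages entry is required.
    THROW 51000, 'Something went wrong in the nightly load.', 1;
END TRY
BEGIN CATCH
    -- Log, clean up, etc., then re-throw the original error
    -- with its number, message, severity and state intact:
    THROW;
END CATCH;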
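
And a quick taste of a few of the new scalar functions (all values are arbitrary):

SELECT
    TRY_CONVERT(INT, 'abc')           AS TryConvertResult,    -- NULL instead of an error
    IIF(2 > 1, 'yes', 'no')           AS IifResult,           -- 'yes'
    CHOOSE(2, 'one', 'two', 'three')  AS ChooseResult,        -- 'two'
    CONCAT('SQL', ' Server ', 2012)   AS ConcatResult,        -- handles non-string types
    EOMONTH('20111010')               AS EndOfMonth,          -- 2011-10-31
    DATEFROMPARTS(2011, 10, 10)       AS DateFromPartsResult;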

Other Features

There are several features which don’t really fit into a single category, so I have put them in the “Other Features” bucket.  Some of these are great features though, so don’t think that because I left them for last they are insignificant!  Explore these new features and I think you will find multiple features that may be of use to you.

  • FileTable – SQL Server 2008 had the ability to use Filestream to add unstructured data to SQL Server.  Filestream was a little difficult to use effectively, and required code to process your files and put them into SQL Server.  In addition, it was not supported with database mirroring (DBM).  FileTable completely changes the way that you can use SQL Server.  With FileTable, you can expose an SMB share to Windows, simply drop files into the share, and use that share exactly like you would any folder within Windows – you can create files and folders, delete them, move them, etc.  In reality, when you do this you are actually modifying SQL Server:  all of the files are stored in SQL Server and are searchable.  On the SQL Server side, you can query or modify the file properties and attributes, change the parent folder or delete any of the files or folders.  As one more added benefit, it is now fully supported by Availability Groups (AGs), so you can keep a copy of the files on your secondaries as well.  This feature is everything about SQL Server and nothing about it at all.  Using this feature alone, you can look at any non-SQL Server application and see a way to use FileTable and SQL Server.  Simply take the application’s files and drop them into FileTable, and you instantly have full database and AG access to all of that data to query, change, move offsite, etc.  While this is not typically the way that people view SQL Server, I think this feature has the potential to drastically change our file systems (see the sketch after this list).
  • SQL Server 2012 Express Local Database Runtime (LocalDB) – if you have ever used SQL Server Compact Edition for a lightweight database for an application, you have probably been pretty frustrated by the lack of programmability support.  SQL Server 2012 LocalDB is a lightweight version of SQL Server Express, is a zero-installation version (like CE), but does support all of the programmability features of Express.
  • Server Core support – SQL Server can now be installed on Windows Server Core.  This might be important to you for security, licensing or other reasons.  I think that many companies will be excited about the ability to install SQL Server on Windows Core.
  • Partitioning – until SQL Server 2008 SP2 and 2008 R2 SP1, you could only have a maximum of 1,000 partitions.  For many deployments, including data warehouses, this limit was often a difficult barrier.  In 2008 SP2 and 2008 R2 SP1, SQL Server began allowing up to 15,000 partitions, but only after running sp_db_increased_partitions on each database, and even then, it altered the database version, possibly creating problems for you if you tried to restore the database on another server.  With SQL Server Denali, all of these quirks go away and the product natively supports up to 15,000 partitions.
  • Distributed Replay – SQL Server has historically been very limited in its ability to capture and replay a workload for testing purposes.  SQL Server Denali introduces the Distributed Replay Utility (DRU), which allows you to capture a workload and then play it back with multiple “clients”, specifying parameters for each.  This is a HUGE leap forward for anyone who needs to test performance or a new system.  I have used DRU quite a bit and it has been absolutely critical in our testing and deployment of SQL Server Denali.
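
As promised above, here is a hedged FileTable sketch – it assumes FILESTREAM is enabled on the instance and that the database already has a FILESTREAM filegroup and FileTable directory configured; the names are mine:

CREATE TABLE dbo.ApplicationFiles AS FILETABLE
WITH
(
    FILETABLE_DIRECTORY = 'ApplicationFiles',      -- folder name exposed on the SMB share
    FILETABLE_COLLATE_FILENAME = database_default
);

-- Files dropped into the share are then queryable like any other table:
SELECT name, file_type, cached_file_size, last_write_time
FROM dbo.ApplicationFiles;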

At this point I have given a pretty good overview of all of the features.  In the coming weeks I will deal with more specifics on many of these and go into much more depth about how they are used and what it means to you.


SQL Server “Denali” CTP3 Features (Business Intelligence and SSIS/Data Tools – Part 2)

In my last post, I discussed the new AlwaysOn and Security features in SQL Server “Denali” CTP3.  This post is about some of the Business Intelligence (BI) improvements, and changes in SSIS and Data Tools.

Business Intelligence

I am going to focus on three different areas of BI changes that I think are notable:

  • Columnstore – if you are a heavy OLAP user, you will probably be pretty excited about Columnstore, and this may be THE selling feature of the product for you.  Without going into b-trees, vectors, segments and dictionaries, here is the bottom line:  you can probably see speed improvements of hundreds to thousands of times by using columnstore.  Columnstore derives its name from the fact that the columnstore index stores the data in columns instead of rows (a minimal sketch follows this list).
  • Project “Crescent” – Crescent is such a huge addition that it is hard to describe without a full week’s worth of posts.  Imagine having a BI tool with an Office-like interface, data that is easy to find and use with just a few clicks, the ability to navigate your model, see table subtotals, totals and blocking, matrix subtotals, grand totals and blocking, Excel-like charts and graphs, motion charts, interactive models and the ability to filter with sliders, calendars, lists or custom parameters.  All of that and much more is included in Crescent.  Crescent seems to be a project where the SQL Server team took a step back and said, “What should BI really look like?”.  I think you will be pretty excited about Crescent.
  • Self-Service BI – Microsoft has continued down the road of self-service BI, and has enhanced tools like PowerPivot to support KPIs, Top N, new financial and statistical calculations, and perspectives.  While the full functionality continues to rely on SharePoint, the fact that your end-users can work on the data without IT’s involvement continues to be the selling point.
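
Here is the columnstore sketch promised above – the fact table and columns are hypothetical:

-- A nonclustered columnstore index covering the columns your star-join queries touch.
-- Note:  in this release the index makes the table read-only until it is dropped or disabled.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, Quantity);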

SSIS / Data Tools

There is really a lot of work that has taken place in Denali around data quality, moving data between systems and data modeling.  For lack of a better way to organize this data, I have lumped it all together under “SSIS / Data Tools” because to me, most of these features are related and will be used by the same groups of users.  All of this falls under what Microsoft is calling “Enterprise Information Management (EIM)”.

  • Master Data Services (MDS) – gone are the days when creating lookup or fact tables from multiple systems involved iterations of importing from Excel, showing the user, exporting back to Excel, letting the user clean up the data and then starting over.  MDS allows multiple users to collaborate on the data, even using tools like Excel, while not losing data consistency.
  • Data Quality Services (DQS) – Along with MDS, DQS allows you to cleanse and scrub your data using a Data Quality Knowledge Base (DQKB).  While that sounds daunting, getting up and running with a DQKB can be done in just a few minutes.  If your business could benefit from more consistent data, DQS can help you achieve those goals both now and into the future as you work with new data.
  • SSIS – SSIS has been enhanced in many ways, including an improved UI, toolbox and undo/redo functionality.  In addition, there are built-in troubleshooting reports, simplified deployment, and better packaging, which helps with version control.  The changes in SSIS are substantial and are another area that is worth a deeper dive here in the coming months.

If you are a BI user or work with SSIS or Data stores, I think you will find quite a few features in Denali that can help you with your job.

Check out Part 3 for Programmability and Other enhancements in CTP3.


SQL Server “Denali” CTP3 Features (AlwaysOn and Security Features – Part 1)

Now that CTP3 has been released to the public, I can start to blog about a huge list of content that I have been working on related to Denali.  Although this is technically a post about CTP3 features, CTP3 is very close to what I expect the final product to be, with only a limited set of enhancements from here until RTM.

This post is about features.  After these three posts, where I briefly discuss some of the new bells and whistles of the product, I can start to do some deep dives into functionality and configuration.  I have divided the features up into the following categories:

  • High Availability / Disaster Recovery
  • Security
  • Business Intelligence (BI)
  • SSIS / Data Tools
  • Programmability
  • Other

I will discuss the new features in that order.  I have devoted a significant amount of time to working with the SQL team on the HA/DR (AlwaysOn) features of Denali, so of course that gets put at the top of the list.  🙂

High Availability / Disaster Recovery (“AlwaysOn”)

Over the course of the last two years, we have referred to the high availability features as “HADR”, “HADRON”, and “AlwaysOn”.  From here on I will simply call these features “AlwaysOn”.  AlwaysOn incorporates these new features:

  • Multiple secondaries (multiple copies of the data, similar to the older concept of a “mirror”).
  • Readable secondaries!!  We can now run queries against a secondary without jumping through hoops with snapshots and views and other tricks.  Just query the database and it will work – no more work needed.
  • Backup on secondary – you can now run backups on your secondary to isolate the workload from your primary system, or to store the backups in a different location, like your DR site.
  • Availability Groups – With SQL Server 2005 and 2008, we had database mirroring (DBM), which allowed you to set up a mirroring relationship at the database level.  With Denali we have Availability Groups (AGs), which allow you to set up a data movement relationship with multiple databases in the availability group.  This more accurately describes most applications, and allows you to fail over the AG (and therefore all of its DBs) together.  You can have up to four secondaries, and up to two of them can be synchronous (see the availability group sketch after this list).
  • Automatic page repair – this is the same functionality that we had with DBM, but it has also been included with AGs.  If a data page is corrupted, and this is detected when a query runs, SQL Server will automatically get an uncorrupted copy of the data page from a secondary copy, and fix the corrupted page.  It’s magic, and it truly works that easily, with no user interaction.
  • Online Indexing of LOB data – to me this is one of the most exciting new features.  With SQL Server 2008, we were not able to do online indexing on tables that contained LOB data.  That meant that we had to be much more careful about when we used Reindex, Reorganize, or just skipped the index.  For availability, we had to err on the side of not rebuilding often enough.  Now, we can rebuild all indexes online, even if they contain LOB data!  The only exceptions are XML indexes and the deprecated datatypes:  text, ntext and image.  This means higher availability for everyone out-of-the-box, simply by selecting to rebuild the index online (see the rebuild sketch after this list)!  With our SLA requirements, we look for ways to save seconds per year.  This feature saves us minutes – about half of the budgeted planned downtime – so it is truly an incredible feature.
  • Add Columns with Default Value – Prior to Denali, you could add a new NULLABLE column without incurring any downtime, but could not add a column with a default value.  Now, with Denali, you can add a column with a default value and it is simply a metadata operation until certain things like an index rebuild happen.  This is another feature that has been requested for years and Microsoft has delivered it in Denali.
  • Hyper-V and Live Migration – With Denali you can now deploy SQL Server as a virtual machine and use Hyper-V Live Migration to move the server between one host and another with zero downtime.  This means that for patching the host, it is no longer necessary to incur the downtime associated with failing over a node of the cluster.  If your application can run as a VM, you now have the ability to achieve higher uptime.
  • Dashboards – Multiple new dashboards give you an instant view of your system and how it is functioning.  This can help you quickly find and eliminate problems, or give you the assurance that everything is working correctly.
  • DMVs and XEvents – there are multiple new DMVs and XEvents related to AlwaysOn that allow you to creatively manage your system and raise alerts on events that are critical to you and your environment.  This is worthy of an entire post, so I will dedicate one to this topic in the next few months.
  • Server Core Support – Denali also lets you deploy SQL Server on Windows Server Core.  Depending on the features you have installed, UI security updates may make up as much as 67% of the total patches that you apply.  By reducing the surface area, you can reduce the number of patches you need to apply, therefore reducing the number of reboots or failovers.
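
Here is a rough sketch of creating an availability group on the primary – all names, endpoints and modes are illustrative, and it assumes the AlwaysOn feature and mirroring endpoints are already set up; the secondaries are joined in separate steps:

CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH
    (
        ENDPOINT_URL = N'TCP://sqlnode1.example.com:5022',   -- hypothetical endpoint
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC
    ),
    N'SQLNODE2' WITH
    (
        ENDPOINT_URL = N'TCP://sqlnode2.example.com:5022',   -- hypothetical endpoint
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC
    );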
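
And the online LOB index rebuild requires nothing new at all – the same ONLINE option now works on tables with LOB columns (the table name is hypothetical):

-- Prior to Denali, this would fail if dbo.Documents contained LOB columns:
ALTER INDEX ALL ON dbo.Documents
REBUILD WITH (ONLINE = ON);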

Security

There are several new security features which are compelling.  I will briefly mention them here, but you should definitely look into these features in the product.  I think you will find some features that can both save you headaches and enhance your auditing.

  • Apply Default Schema to Group – this is a great new feature, and one of the most highly requested security features in SQL Server.  A database schema can now be linked to a Windows group instead of individual users.  While this is being billed as a way to help with compliance, to me the real win is the reduced administration overhead and the ability to prevent errors caused by assigning schemas to the wrong users (see the sketch after this list).
  • Auditing functionality now works on all SKUs – the SKU-specific limits for auditing are now gone.  No matter what edition of SQL Server you use, you will be able to implement the auditing features.
  • User-Defined Auditing and Filtering – you can now customize the audit information and remove unnecessary content from being entered in your audit logs.  See sp_audit_write in Books Online for details on the new functionality.
  • Stack Info – enhancements have been made to see the full stack information for 3-tier applications.  Typically all you can see is the middle tier, which usually has a limited number of connection strings.  Now, with the enhanced functionality, you can trace the call all the way from the calling application.
  • Contained Databases (CDBs) – If you have single-database applications, you will be excited about contained databases.  CDBs encapsulate almost everything about the database into a single container.  For example, the users can live inside the CDB as contained users, rather than depending on logins stored in the system databases.  This makes moving the database to new instances or back and forth between environments very simple (see the contained database sketch after this list).
  • Crypto Enhancements – enhancements have been made for increased functionality and to support newer algorithms with higher security.
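
Here is a minimal sketch of assigning a default schema to a Windows group (the domain, group and schema names are hypothetical):

-- Create a database user for a Windows group and set its default schema:
CREATE USER [CONTOSO\SalesTeam] FOR LOGIN [CONTOSO\SalesTeam]
    WITH DEFAULT_SCHEMA = Sales;

-- Or set it on an existing group user:
ALTER USER [CONTOSO\SalesTeam] WITH DEFAULT_SCHEMA = Sales;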
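
And a minimal contained database sketch (the names and password are made up; this assumes contained database authentication has been enabled on the instance with sp_configure):

-- Create a partially contained database:
CREATE DATABASE SalesApp CONTAINMENT = PARTIAL;
GO

-- A contained user authenticates at the database, with no server-level login:
USE SalesApp;
GO
CREATE USER SalesAppUser WITH PASSWORD = 'S0me$trongP@ssw0rd';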

You can see from just the list of AlwaysOn and Security features that this version of SQL Server is packed full of new stuff.  As I mentioned in a post earlier this year, you need to start spending some time working with the CTPs so that you get up-to-speed quickly.

Check out Part 2 for BI and SSIS/Data features.


SQL Server “Denali” Feature Pack

In addition to CTP3 of SQL Server code name “Denali”, Microsoft has also released a feature pack (http://www.microsoft.com/download/en/details.aspx?id=26726) that is full of useful tools that work with CTP3.  Some of the features I find most exciting in the feature pack:

  • Master Data Services Add-in for Excel – this add-in allows multiple users to concurrently work on the master data via Excel, and click to publish back to the MDS database – all without worrying about the data integrity.  One of the big wins here is that IT resources don’t have to be consumed with scrubbing data.  Instead, the data owners can work to find the problems and fix them.
  • Report Builder – this is a report-authoring tool designed for the more technical user, and leverages the functionality of reporting services.
  • Microsoft System CLR data types – this is a package that allows applications to use the SQL Server “Denali” geometry, geography and hierarchyid types outside of SQL Server.  If you are an application developer and are interested in these new features, you will probably be excited about this package.
  • Books Online – the revised “Denali” books online contains the new features of Denali, so even if you are not ready to install Denali, BOL can help you get up-to-speed on some of the changes in this release.
  • “Denali” upgrade advisor – No matter what version you are currently using, you should always run the upgrade advisor to prepare for the next SQL Server version.  Run this now on your existing databases to begin making any necessary changes prior to RTM.  By doing this now, you will be ready when Denali is released.
  • Service Broker External Activator – The Service Broker External Activator allows you to take code that uses Service Broker and move the processing logic to an application outside of SQL Server.  This makes so much sense and follows what we have done for years with N-tier architecture.  Now you can move CPU-intensive or long-running tasks outside of SQL Server, put them on a different server, and even run them under a different security context, while keeping the functionality of Service Broker.  This is an incredible addition to the product!
  • PowerShell Extensions – if you haven’t figured out by now, PowerShell is the wave of the future – and the present.  If you would like to build powerful scripts or full featured applications to manage SQL Server, the PowerShell Extensions cmdlets will allow you to do that.
  • Shared Management Objects (SMO) – if you have current applications that use SMO, you will want to integrate this new object model into your code so that you can take advantage of SQL Server “Denali”.
  • …and more.  I have highlighted the features that excite me the most, but your environment may be very different and you may be excited about some of the other aspects of this feature pack.

SQL Server Code Name “Denali” CTP3 Released

Today Microsoft announced the release of SQL Server “Denali” CTP3, available at https://www.microsoft.com/betaexperience/pd/SQLDCTP3CTA/enus/.  CTP3 contains most of the features that will be released at RTM, so download the bits today and start getting familiar with SQL Server “Denali”.

Now that CTP3 has been released, I can discuss many of the new features that were previously covered under NDA.  Check back here in the coming weeks for more information on new features!


Clustering setup error: “There was an error creating, configuring, or bringing online the Physical Disk resource (disk) ‘Cluster Disk 1’”

In March, 2011, Microsoft added asymmetric storage support for Windows Server 2008 and 2008 R2 failover clustering.  That means that not all nodes in the cluster have to share the same storage; some nodes can use one set of storage, and some nodes can use another.  A likely scenario for this is in a multi-site cluster, where each site has its own storage array.

When adding nodes in a cluster that has asymmetric storage, you may see an error similar to “There was an error creating, configuring, or bringing online the Physical Disk resource (disk) ‘Cluster Disk 1’”.  You may see this error in several different locations, but generally you will first see it on the report screen at the end of the Add Node wizard.

The reason for this is logical but not obvious, and neither is the solution.  First, and most importantly, the nodes were added successfully and there is no problem – you just can’t use the disks yet.  Do not remove the nodes and go through the Add Node process again.

Explanation

Starting with Windows Server 2008, when you expose disks to the cluster, those disks are placed in a hidden cluster resource group called “Available Storage”.  Any of your “Services/Applications”, like SQL Server or MS-DTC, are also resource groups, but that has all been abstracted to make the interface more friendly and less technical.  Every resource group is owned by a node, and the active ownership changes when you fail over between nodes.  If you click on any “Service/Application” in Failover Cluster Management, the top-right pane will show the current owner.

So what happened with your installation and why did it give you an error?  Imagine that you have a single cluster with three local nodes (LocalNode1, LocalNode2, LocalNode3) and two remote nodes (RemoteNode1, RemoteNode2).  The local nodes use one storage array, and the remote nodes use another storage array.  You initially set up the cluster with the local nodes and everything works correctly.  You complete the setup by installing SQL Server, or another application, using all of the disks in Available Storage.

At that point, the “Available Storage” resource group has no disks, but the resource group is owned by one of the local nodes.  Although you can’t manage Available Storage like other resource groups, behind-the-scenes, it is a resource group just like any other.

You then use the Add Node wizard to perform cluster validation, and the nodes are added successfully, except that you get the error mentioned above.  The reason for this is simple:

  • All unused cluster disks are placed into the Available Storage resource group
  • The Available Storage resource group is owned by a local node
  • The possible owners of the new disks are only the remote nodes, because it is an asymmetric storage setup, and that storage array is only available to the remote nodes, not the local nodes.

Therefore, the disks are successfully added, but they cannot be brought online by the owner of Available Storage (a local node) because it cannot be a possible owner of the remote disks.  This is also indicated further in the error message screen above:  “Resource for ‘Cluster Disk 1’ has been created but will not be brought online because the disk is not visible from the node which currently owns the ‘Available Storage’ group. To bring this resource online, move that group to a node that can see the disk. The possible owners list for this disk resource has been set to the nodes that can host this resource.”

The solution is simple – you just need to change the current owner of Available Storage to be one of the nodes that is a possible owner of the new disks, in this case RemoteNode1 or RemoteNode2.  Unfortunately, there is currently no way to do this through the Failover Cluster Management GUI, and you must resort to cluster.exe:

C:\>cluster.exe group "available storage" /moveto:RemoteNode1

You will immediately see a message saying “Moving resource group ‘available storage’…”, followed by Available Storage with a Status of “Online” on the new node.  If you watch Available Storage in Failover Cluster Management, you will see the disks come online as the move happens.

While this is not initially straightforward, it is very simple.  In more complex setups, you may have unused cluster disks on several different asymmetric storage devices.  In this case, you will always have at least some disks that are offline, because Available Storage can only be owned by one node at a time, and that node will not be able to see the disks presented only to other nodes.

The Windows Clustering team is aware of this issue, and I fully expect this to be addressed in a future release.  For now, I am thankful that I have the ability to create an asymmetric storage cluster and will deal with some of the minor issues like this.
