SQL Server 2005
SQL Server 2005 was released on November 7, 2005.
Below are two of the better articles on the many new features compared with
earlier SQL Server versions, along with an overview of the differences in
cube implementation between AS 2000 and AS 2005.
By Michael Youssef
Now you can use the .NET Framework Class Library (more
than 5,500 classes) to write your SQL Server objects, such as stored
procedures, triggers, and user-defined functions.
The first of these benefits is a more powerful
programming model: using .NET languages to write your SQL Server objects is
more expressive than traditional T-SQL statements alone, although T-SQL is,
and will remain, the primary language of SQL Server. A safer and more robust
environment for databases was also a design goal.
The really good thing that makes our team (and, in fact, developers around the world) happy is that Microsoft will integrate database development tools into future versions of Visual Studio .NET. You will use a single Visual Studio to develop and debug your database objects and scripts as well as to write your client-side or middle-tier code; that is, you will use Visual Studio .NET to develop the entire application. Microsoft will ship these capabilities in the version of Visual Studio .NET code-named "Whidbey".
Stability was one of the most important issues
that Microsoft wanted to improve in this release.
Now you have the ability to write complex
database code in .NET languages such as C#, because the CLR is hosted inside
the database engine.
Choosing between .NET languages and T-SQL when
writing your SQL Server objects is a question that you will ask yourself each
time you develop a new database object.
I think that T-SQL is very appropriate when you want code that performs only data access, like using an INSERT statement to insert records into a database. You will need to use .NET languages when you want the code to implement complex business logic that needs the power of the .NET Framework Class Library.
An important question that you may ask is: "Now that Yukon understands both .NET code and T-SQL, where should the data go and where should the functionality go?" In fact, it is up to you and to the business logic. If you put both the data and the functionality on the server, you will hurt server performance when the functionality consumes a lot of processor time; you can address this by keeping the data on the server and placing the functionality on the client.
Microsoft has done a good job improving the database engine, XML and ADO.NET Support and Database Administrator tools.
For database administrators, Microsoft has
worked hard to improve the whole administration process. Most
administration tasks can now be performed without taking the server offline.
Replication and the SQL Profiler have also been improved.
Using .NET, you will also be able to program management and administrative tasks.
As you know, indexing was one area where administrators needed more control, and in Yukon you can create and maintain an index online, without taking the server offline, using the ONLINE option of CREATE INDEX. Dropping and altering indexes can be performed online as well.
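As a sketch of the online index options described above (the table and index names here are hypothetical):

```sql
-- Build an index online; the table stays available for reads and
-- writes during the operation (Enterprise Edition feature).
CREATE INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON);

-- Existing indexes can likewise be rebuilt online:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (ONLINE = ON);
```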
Using SQL Server code-named "Yukon" and Microsoft Visual Studio .NET code-named "Whidbey", Microsoft takes us many steps forward on the way to Global Digital Systems with .NET (GDS.NET).
By Eric Brown
During the planning phase of the next version of
Microsoft® SQL Server™, code-named "Yukon", Microsoft reconsidered how
developers build database applications.
The result of this planning is a new database programming platform that's been extended in a number of directions. First, the ability to host the .NET Framework common language runtime (CLR) extends the database into the land of procedural programming and managed code. Second, .NET Framework hosting integration provides powerful object database capabilities from within SQL Server. Deep support for XML was gained via a full-fledged XML data type which carries all the capabilities of relational data types. Additionally, server-side support for XML Query (XQuery) and XML Schema Definition language (XSD) standards was added. Finally, SQL Server Yukon includes significant enhancements to the T-SQL language.
These new programming models and enhanced languages come together to create a range of programmability that both complements and builds upon the current relational database model. The net result of this architectural work is the ability to create more scalable, reliable, robust applications, and an increase in developer productivity. Another result of these models is a new application framework called the SQL Service Broker—a distributed application framework for asynchronous message delivery. I'll get into a deeper discussion of the Yukon SQL Service Broker later in this article, but let's first look at the major changes and enhancements in the programming model, starting with T-SQL.
Developers can now write stored procedures, triggers, and user-defined functions (UDFs) with any Microsoft .NET-compliant language, including Visual Basic® .NET and C#. In addition, three new objects—user-defined types (UDTs), aggregates, and functions—can be created in managed code. The CLR is the heart of the .NET Framework and provides the execution environment for all .NET Framework-based code. The CLR provides various functions and services required for program execution, including just-in-time compilation, memory management and allocation, type safety enforcement, exception handling, thread management, and security. SQL Server Yukon acts as the host for this functionality in much the same way an operating system hosts an application.
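Once a managed routine has been compiled into an assembly, it is registered and exposed through T-SQL. The assembly, namespace, class, and method names below are hypothetical; a minimal sketch might look like this:

```sql
-- Register the compiled .NET assembly with the server.
CREATE ASSEMBLY MyClrCode
    FROM 'C:\code\MyClrCode.dll'
    WITH PERMISSION_SET = SAFE;

-- Expose one of its static methods as a stored procedure.
CREATE PROCEDURE dbo.usp_SendGreeting (@name NVARCHAR(200))
AS EXTERNAL NAME MyClrCode.[MyNamespace.Greetings].SendGreeting;
```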
As a result of CLR integration, the way that developers write server-side code changes fundamentally.
In previous versions of SQL Server, database programmers were limited to using T-SQL when writing code on the server side. With CLR integration, database developers can now perform tasks that were impossible or difficult to achieve with T-SQL alone. Both Visual Basic .NET and C# are modern programming languages offering full support for arrays, structured exception handling, and collections. Developers can take advantage of CLR integration to write code that has more complex logic and is more suited for computational tasks than T-SQL can support.
What's more, Visual Basic .NET and C# offer object-oriented capabilities such as encapsulation, inheritance, and polymorphism. Related code can now be easily organized into classes and namespaces. When working with large amounts of code on the server, this allows you to more easily organize and maintain your code investments. This ability to logically and physically organize code into assemblies and namespaces is a huge benefit, and will allow you to better find and relate different pieces of code in a large database implementation.
When designing UDTs, you'll need to
consider such things as nullability and check constraints. You can even use
private functions to extend the capability of the data type; for example, you
might add native XML serialization capabilities.
The highest security level, SAFE, allows only computation and access to data. The second level, EXTERNAL_ACCESS, also allows access to external system resources. The least secure level, UNSAFE, has no restrictions except for operations that compromise the stability of the server.
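The permission set is declared when an assembly is registered. As a sketch (the assembly names and paths are hypothetical):

```sql
CREATE ASSEMBLY MathRoutines
    FROM 'C:\code\MathRoutines.dll'
    WITH PERMISSION_SET = SAFE;            -- computation and in-database data access only

CREATE ASSEMBLY FileImport
    FROM 'C:\code\FileImport.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS; -- may also touch files, the network, and so on

CREATE ASSEMBLY LegacyInterop
    FROM 'C:\code\LegacyInterop.dll'
    WITH PERMISSION_SET = UNSAFE;          -- unrestricted; use with care
```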
Choosing Between T-SQL and Managed Code
Managed code in the data tier allows
you to perform number crunching and complicated execution logic on database
objects. The .NET Framework features extensive support for string handling,
regular expressions, error capture, and more. Additionally, with the
functionality found in the .NET Framework base class library, database
developers now have full access to thousands of pre-built classes and routines
that can be easily accessed from any stored procedure, trigger, or UDF. For
scenarios that require string handling, mathematical functions, date
operations, system resource access, advanced encryption algorithms, file
access, image processing, or XML data manipulation, managed stored procedures,
functions, triggers, and aggregates provide a much simpler programming model.
When writing stored procedures, triggers, and UDFs, one decision programmers will now have to make is whether to use traditional T-SQL or a CLR-compliant language such as Visual Basic .NET or C#. The answer depends on the needs of a particular project. T-SQL is best used when the code will perform mostly data access with little or no procedural logic. CLR-compliant languages are best suited for mathematically intensive functions and procedures that feature complex logic, or when you want to build a solution upon the .NET Framework base class libraries.
Code placement is another important
feature of CLR integration. Both T-SQL and CLR-hosted code run inside the
database engine on the server. Having .NET Framework functionality and the
database close together allows you to take advantage of the processing power
of a server machine. I recommend using the SQL Server Profiler to review the
performance of your applications, and then make your choice based on the
empirical results of your testing. The SQL Server Profiler has been enhanced to
provide deep profiling of CLR performance within SQL Server, including a new
graphical output format that can be used for comparison.
In .NET terms, the UDT is a struct or value type, not a class or an enum. This means that memory usage will be optimized; the trade-off is that the UDT does not support inheritance and polymorphism. It can have both public and private functions. In fact, tasks such as constraint checking should be done in private functions. If your UDT is constructed of multiple pieces of information (for example, a geospatial type would contain longitude, latitude, and maybe altitude), you will need to provide private fields that contain those elements. Once you've created a UDT, you can use T-SQL to register the type in SQL Server. When this happens, the bits of the DLL are actually stored in the database.
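The registration step can be sketched as follows; the assembly name GeoTypes and the class MyNamespace.GeoPoint are hypothetical:

```sql
-- Register the assembly that contains the UDT implementation.
CREATE ASSEMBLY GeoTypes
    FROM 'C:\code\GeoTypes.dll'
    WITH PERMISSION_SET = SAFE;

-- Register the managed type itself; the DLL's bits are stored in the database.
CREATE TYPE dbo.GeoPoint
    EXTERNAL NAME GeoTypes.[MyNamespace.GeoPoint];

-- Once registered, the type behaves like any other column type:
CREATE TABLE dbo.Landmark (
    Id       INT PRIMARY KEY,
    Location dbo.GeoPoint
);
```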
UDFs come in two varieties:
scalar-valued and table-valued. A scalar-valued function is one that returns
a single value such as a string, integer, or bit. Table-valued functions
return a resultset that can be comprised of one or more columns.
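The two varieties look like this in T-SQL; the function bodies and the dbo.Orders table are hypothetical:

```sql
-- A scalar-valued function returns a single value.
CREATE FUNCTION dbo.fn_FullName (@first NVARCHAR(50), @last NVARCHAR(50))
RETURNS NVARCHAR(101)
AS
BEGIN
    RETURN @first + N' ' + @last;
END;
GO

-- A table-valued function returns a resultset of one or more columns.
CREATE FUNCTION dbo.fn_OrdersForCustomer (@customerId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @customerId
);
```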
You can use a UDAgg to translate data stored in the database into mathematical values. Take, for example, a statistical function. You could store data containing results from a quantitative survey in a relation table. In order to figure out the weighted mean or standard deviation of these results, you could call a UDAgg that calculates and returns that information.
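Once the managed aggregate is compiled, it is registered and invoked much like a built-in aggregate. The assembly and class names here are hypothetical, and note that a Yukon UDAgg accepts a single input parameter:

```sql
CREATE ASSEMBLY StatRoutines
    FROM 'C:\code\StatRoutines.dll'
    WITH PERMISSION_SET = SAFE;

CREATE AGGREGATE dbo.StdDev (@value FLOAT)
RETURNS FLOAT
EXTERNAL NAME StatRoutines.[MyNamespace.StdDev];

-- Used like any built-in aggregate, e.g.:
-- SELECT Question, dbo.StdDev(Score)
-- FROM dbo.SurveyResults
-- GROUP BY Question;
```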
Managed code routines that work as stored procedures are best used when you want to return a simple resultset to the caller with modifications, or if you want to simply execute a data manipulation language (DML) or data definition language (DDL) statement. Once registered on the server, taking advantage of the code is as simple as writing a stored procedure.
To wrap a managed code method in a stored procedure, use the CREATE PROCEDURE statement with an EXTERNAL NAME clause naming the assembly, class, and method, as shown in this prototype:

CREATE PROCEDURE mysproc (@username NVARCHAR(4000))
AS EXTERNAL NAME YukonCLR.[MyNamespace.CLRCode].myfunction
The XML story in SQL Server Yukon
really begins with SQL Server 2000. That version of SQL Server introduced the
ability to return relational data as XML, bulk load and shred XML documents,
and expose database objects as XML-based Web services. The Web Services
Toolkit, also known as SQLXML 3.0, provides Web services capability to stored
procedures, XML templates, and SQL Server UDFs. Two important pieces were
missing in the XML technology: a native XML storage mechanism (the XML data
type) and a sophisticated query language for querying it.
XML has evolved from a representational technology to a wire format and is now seen as a storage format. Persistent storage in XML has become an interesting story, and there are many possible applications of the XML data type. First, XML is useful when you don't know the schema of the object. Second, the XML data type is useful for dynamic schemas in situations where information is reshaped on a continual basis. Third, XML persistence is central to XML document applications. There isn't a single application scenario that best exemplifies proper XML data type usage. The programming example I'll present later in this article uses XML as a storage format for a dynamic schema that represents a customer order in the form of a message to the server.
The XML data type is a full-fledged data type. It has all the powers and capabilities of the other types found in SQL Server. The importance of this can't be overstated. As a full-fledged type, an XML column can be indexed, it can have row and column constraints via an XSD schema (although an XSD schema is not required), and it can be queried using an XQuery expression embedded in T-SQL. SQL Server also adds data-modification functions on top of the XQuery W3C specification: using the modify method, the user can add subtrees, delete subtrees, and update scalar values.
The XML data type is flexible. You can choose whether you want to associate an XML Schema Definition (XSD) schema with a column. When a column has an XSD schema association, it is known as typed XML. Without the XSD association, it's untyped XML. An XSD schema can be used to type an XML column after it's imported into the database. It can then be used to create indexes and other metadata for the system catalog. Typed XML columns are significantly faster and more flexible than untyped columns when you're looking to query them. The usage scenario will dictate the XML column type, though most applications will appreciate an XSD schema reference.
Note also that you can store both XML documents and XML fragments in the same column. In addition, you can create a special XML index on XML columns, which creates indexes for the tags, values, and paths in the XML value.
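As a sketch of the indexing syntax (this assumes a table T with a clustered primary key and an XML column xCol; a primary XML index must exist before any secondary one):

```sql
-- The primary XML index shreds tags, values, and paths into an internal structure.
CREATE PRIMARY XML INDEX PXML_T_xCol
    ON T (xCol);

-- A secondary index can then speed up path-based queries.
CREATE XML INDEX SXML_T_xCol_Path
    ON T (xCol)
    USING XML INDEX PXML_T_xCol FOR PATH;
```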
When creating your XML column, you should add an XSD document and associate it with the column. The XSD schema association can be performed from within the SQL Server Workbench, and it is also possible using DDL:

CREATE TABLE T (pk INT PRIMARY KEY, xCol XML(mySchema))
When a schema is imported into the database, it is parsed into various types of components, including ELEMENT, ATTRIBUTE, TYPE (for simple or complex types), ATTRIBUTEGROUP, and MODELGROUP. These components, and any associated namespaces, are then stored in various system tables. You can view these components using dedicated system views. In other words, the schema is not stored intact. This has at least two ramifications. First, the schema tag and its associated attributes are not stored. Instead, the XML tag attributes—targetNamespace, attributeFormDefault, elementFormDefault, and so on—are reassigned to the schema's components. Second, SQL Server makes no provision for recovering and viewing the original schema document once it is imported. It is recommended that you keep a copy of all schemas or store their contents in a dedicated database table. For this, an XML column will suffice. Alternatively, you can retrieve XML schemas using the built-in function XML_SCHEMA_NAMESPACE ('uri'). This returns the content of namespace 'uri'.
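In the released product, the import step described above is performed with CREATE XML SCHEMA COLLECTION. A minimal sketch, with a deliberately tiny hypothetical schema:

```sql
CREATE XML SCHEMA COLLECTION OrderSchemas AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              targetNamespace="http://example.com/order"
              xmlns:o="http://example.com/order">
    <xsd:element name="order" type="xsd:string"/>
</xsd:schema>';

-- Regenerate the imported components as XML (subject to the caveats above):
SELECT XML_SCHEMA_NAMESPACE(N'dbo', N'OrderSchemas');
```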
When the XML column and
an XSD schema are combined, the query capabilities for XML data are
complementary to normal relational data and queries. While I don't recommend
storing all your data in XML, the ability to take advantage of XML in the
database makes XML document management and other application scenarios far
more practical.
The XML Query Language, commonly referred to as XQuery, is an intelligent and robust language optimized for querying all types of XML data. With XQuery, you can run queries against variables and columns of the XML data type using the type's associated methods. With the invocation of the query method, you can query across relational and XML data in the same batch.
As with many of the XML
standards, the development of XQuery, which is currently a working draft, is
overseen by the W3C.
XQuery evolved from a query language called Quilt, which was based on a variety of other languages such as XPath 1.0, XQL, and SQL. It also contains a subset of XPath 2.0. You can use your existing skills with XPath and not have to learn an entirely new language. However, SQL Server Yukon contains significant enhancements that go beyond XPath, such as special functions and support for better iteration, sorting of results, and construction.
XQuery statements consist of a prologue and a body. The prologue contains one or more namespace declarations and/or schema imports that create the context for query processing. The body contains a sequence of expressions that specify the query criteria. These statements are then used within one of the aforementioned methods of the XML data type (query, value, exist, or modify).
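The four methods can be sketched against an XML variable; the document content here is hypothetical:

```sql
DECLARE @doc XML;
SET @doc = N'<order id="42"><item sku="A1" qty="3"/></order>';

-- query() returns XML, value() extracts a typed scalar,
-- and exist() tests whether a path matches.
SELECT @doc.query('/order/item'),
       @doc.value('(/order/@id)[1]', 'INT'),
       @doc.exist('/order/item[@sku="A1"]');

-- modify() updates the XML instance in place.
SET @doc.modify('replace value of (/order/item/@qty)[1] with 5');
```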
XQuery is capable of querying typed XML (data that is associated with an XML schema) as well as an XML document.
XQuery Designer is a new tool, integrated into the new SQL Server Workbench, which makes working with XML data easy. XQuery Designer helps you write queries that manipulate and retrieve data from both XML columns and XML documents. The central development theme for the XQuery Designer was to empower XQuery development without making users learn the ins and outs of the XML Query Language.
Up to this point, I've
talked about the new .NET functionality and XML functionality of SQL Server
Yukon.
Over the last 10 years,
the proliferation of e-commerce applications has created a need for increased
process management across database applications. If you've built an order
entry system or made an online purchase, you are familiar with the workflow
model. When a customer places an order for a book, a transaction must be
committed into the inventory, the shipping, and the credit card systems, and
an order confirmation must be sent through another Web application. Waiting
for each of these processes to happen synchronously doesn't scale well. In
the past, developers had to write complicated stored procedures and use
remote procedure call code to queue messages.
The SQL Server Service
Broker is a distributed application framework for reliable, asynchronous
message delivery between database services.
SQL Server Service Broker solves these problems by automatically handling message order, unique delivery, and conversation identification. Once a conversation is established between two service broker endpoints, an application receives each message only once, in the order in which the message was sent. Your application can process messages exactly once, in order, without additional code. Service Broker automatically includes an identifier in every message. An application can always tell which conversation a particular message belongs to.
Service Broker queues messages for delivery. If the target service is not immediately available, the message remains with the sending service and delivery attempts are repeated until the message is sent successfully. This allows a conversation to continue reliably between two services, even if one service is temporarily unavailable at some point during the conversation.
SQL Server Service Broker provides loose coupling between the initiating service and the target service. A service can put a message on a queue and then continue with its application processing tasks, relying on SQL Server Service Broker to ensure that the message reaches its destination. This loose coupling enables scheduling flexibility. The initiator can send out multiple messages, and multiple target services can process them in parallel. Each target service processes messages at its own pace, depending on the system's current workload.
Queuing also allows systems to distribute processing more evenly, reducing the peak capacity required by a server. This can improve overall throughput and performance in database applications. For example, an order entry application might see an increase in requests at certain times of day, resulting in high resource usage and slow response times. With Service Broker, the order entry application need not perform all of the processing for an order when the application receives it. Instead, the application can enter the order and submit requests for background processing such as billing, shipping, and inventory management. Background applications perform the processing reliably over a period of time, while the order entry application continues to receive new orders.
One of the most difficult things to accomplish in a messaging application is to allow multiple programs to read in parallel from the same queue. Situations like this can cause messages to be processed out of order, even if they're received in order.
Consider a traditional order processing application. Message A, containing instructions about creating the order header, and Message B, containing instructions about creating the order line items, are both received on the queue. If both of these messages are dequeued by separate service program instances and processed at the same time, it is possible that the order line item transaction will attempt to commit first, and fail because the order does not yet exist. The failure causes the transaction to roll back and the message to be requeued and processed again, wasting resources. Traditionally, this problem was solved by combining the information from Message A and Message B in a single message. While this approach is straightforward for two messages, it scales poorly to systems that involve coordinating dozens or hundreds of messages.
Service Broker solves this problem by automatically locking all messages related to the same task, so that these messages can only be received and processed by one service program instance. Meanwhile, other service program instances can continue to dequeue and process messages related to other tasks. This allows multiple parallel service programs to work reliably and efficiently.
One of the most useful features of Service Broker is activation. Activation automatically starts a service program to read messages from a queue as they arrive. If messages are arriving faster than they are being handled, additional instances of the service program are started, up to a configured maximum. If the message arrival rate is reduced and an active service program checks the queue and finds that there are no messages available for processing, the service program shuts down. This allows the number of service program instances to grow and shrink dynamically as the load on the service changes. If the system goes down or is rebooted, service programs are automatically started to read the messages in the queue when the system comes back up. Traditional messaging systems lack this behavior, and frequently end up having either too many or too few resources dedicated to a particular queue at any given time.
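Activation is configured on the queue itself. As a sketch (the queue and procedure names are hypothetical):

```sql
-- Start dbo.usp_ProcessOrder automatically when messages arrive,
-- growing to at most five parallel instances under load.
CREATE QUEUE OrderQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.usp_ProcessOrder,
        MAX_QUEUE_READERS = 5,
        EXECUTE AS SELF
    );
```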
The integrated design of Service Broker provides benefits for application performance and administration. Integration with SQL Server allows transactional messaging; that is, each loop of a service program—receiving a message, processing the message, and sending a reply message—can be enclosed in a single database transaction. If the transaction fails, all work rolls back, and the received message is requeued so that another attempt can be made to process it. No actions take effect until the application commits the transaction. The application remains in a consistent state, without the added resource overhead and complexity of a distributed transaction coordinator.
Administration is easier when your data, messages, and application logic are all in the database. Only one item—instead of three or four separate components—requires maintenance for disaster recovery, clustering, security, backup, and so on. For instance, with traditional messaging systems, the message store and the database can become out of sync. For example, when one component is restored from a backup, the other component must also be restored from a backup taken at the same time, or the message store and the database may be inconsistent. With a single database for both messages and data, this is no longer an issue.
Using a common development environment is also a benefit. The messaging and data parts of an application can use the same languages and tools in a Service Broker application. This extends the developer's familiarity with database programming techniques to encompass message-based programming. Stored procedures that implement a Service Broker service can be written in either T-SQL or one of the CLR-targeted languages.
Integration also makes automatic resource management possible. Service Broker runs
in the context of the SQL Server instance, so the broker can maintain an
aggregate view of all messages ready to be transmitted from all databases in
the instance. This allows each database to maintain its own queues while
still maintaining fairness in resource usage across the entire SQL Server instance.
Let's see how these concepts come together in an order entry application where a customer places orders on a Web site. The messages sent between the services are stored in an XML column. For illustration purposes, I'll use an ASP.NET-based Web service that's registered and hosted by an instance of the CLR within SQL Server. This service communicates with a partner to send and receive purchase order information. Using an XML column for the messages allows for schema encapsulation and simplification.
The customer places an order on the Web site. When the order transaction commits, a message containing the order information is put in the Order Entry Service queue. The order is sent to the Order Entry Service as an XML column, which allows all the order information to be consolidated into one column.
The Order Entry Service
starts a transaction, receives the message, and processes it. The Order Entry
Service then sends a request on to the Credit Limit Service to verify credit
status. If it is okay, the Order Entry Service moves to the next step.
The Shipping Service, a T-SQL stored procedure, uses XQuery to parse the XML message. The customer information is used to generate an order shipment. When the shipment is complete, a message is sent to the Billing Service Queue. When the Billing Service Queue receives the "ship complete" message, it updates the order status in the database to indicate the order has been shipped. All messages to and from the Order Entry Service are written to an audit table for later analysis and problem resolution.
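The messaging plumbing behind a flow like this can be sketched in T-SQL. All of the service, contract, and message type names below are hypothetical:

```sql
-- Define what may be sent and by whom.
CREATE MESSAGE TYPE [//example.com/Order] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//example.com/OrderContract]
    ([//example.com/Order] SENT BY INITIATOR);

-- Each service reads from its own queue.
CREATE QUEUE OrderEntryQueue;
CREATE SERVICE [//example.com/OrderEntry]
    ON QUEUE OrderEntryQueue ([//example.com/OrderContract]);
CREATE QUEUE ShippingQueue;
CREATE SERVICE [//example.com/Shipping]
    ON QUEUE ShippingQueue ([//example.com/OrderContract]);

-- Open a conversation and send an order message.
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//example.com/OrderEntry]
    TO SERVICE   '//example.com/Shipping'
    ON CONTRACT  [//example.com/OrderContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//example.com/Order]
    (N'<order id="42"><item sku="A1" qty="3"/></order>');

-- The target pulls the next message off its queue, typically in a transaction:
RECEIVE TOP (1) conversation_handle, message_type_name, message_body
    FROM ShippingQueue;
```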
You can see from this
simple example how the new programming, XML, and Service Broker capabilities
of SQL Server Yukon come together to simplify building reliable, distributed
database applications.