Anatomy of a data record (from Paul Randal's blog)
http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/
This week I'm going to post a bunch of info on the basic structures used to store data and track allocations in SQL Server. A bunch of this was posted back when I started blogging at TechEd 2006 but I want to consolidate/clarify info and add more about using DBCC PAGE to examine the various structures.
So, what are records? At the simplest level, a record is the physical storage associated with a table or index row. Of course, it gets much more complicated than that…
Data records
- Data records are stored on data pages.
- Data records store rows from a heap or the leaf level of a clustered index.
- A data record always stores all columns from a table row – either by-value or by-reference.
- If any columns are for LOB data types (text, ntext, image, and the new LOB types in SQL Server 2005 – varchar(max), nvarchar(max), varbinary(max), XML), then there's a pointer stored in the data record which points to a text record on a different page (the root of a loose tree that stores the LOB value). Exceptions to this are when the schema has been set to store LOB columns 'in-row' when possible (see the sketch after this list). This is when a LOB value is small enough to fit within the size limits of a data record. This is a performance benefit as selecting the LOB column does not require an extra IO to read the text record.
- In SQL Server 2005, non-LOB variable length columns (e.g. varchar, sql_variant) may also be stored 'off-row' as part of the row-overflow feature of having table rows longer than 8060 bytes. In this case the storage format is the same as for LOB values – a pointer in the data record pointing to a text record.
- There is a difference in how the columns are laid out between heaps and clustered indexes – I'll cover that in a later post.
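To make the in-row option above concrete, here's a minimal sketch using sp_tableoption. The table docs and its column are hypothetical; 0 (the default) keeps values of the MAX types in-row when they fit, 1 always pushes them off-row into text records, and the older text/ntext/image types have a similar 'text in row' option.
CREATE TABLE docs (docid INT, body NVARCHAR(MAX));
GO
-- keep MAX-type values in-row when they fit within the record size limits (hypothetical table)
EXEC sp_tableoption 'docs', 'large value types out of row', 0;
GO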
Forwarded/Forwarding records
- These are technically data records and are only present in a heap.
- A forwarded record is a data record in a heap that was updated and was too large to fit in-place on its original page and so has been moved to another page. It contains a back-pointer to the forwarding record.
- A forwarding record is left in its place and points to the new location of the record. It's sometimes known as a forwarding-stub, as all it contains is the location of the real data record.
- This is done to avoid having to update any non-clustered index records that point back directly to the original physical location of the record.
- Although this optimizes non-clustered index maintenance during updates, it can cause additional IOs during SELECTs. This is because the non-clustered index record points to the old location of the data record, so an extra IO may be needed to follow the forwarding record to the real location of the data row. This is fuel for the heap vs clustered index debate, in favor of clustered indexes.
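A quick way to see whether a heap has accumulated forwarded records is the forwarded_record_count column of sys.dm_db_index_physical_stats in SQL Server 2005 (a sketch; the table name is a placeholder, and the count is only populated for heaps when using the SAMPLED or DETAILED mode):
SELECT index_type_desc, forwarded_record_count, page_count
FROM sys.dm_db_index_physical_stats (DB_ID (), OBJECT_ID (N'myheap'), NULL, NULL, 'DETAILED');
GO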
Index records
- Index records are stored on index pages.
- There are two types of index records (which differ only in what columns they store):
- Those that store non-clustered index rows at the leaf level of a non-clustered index
- Those that comprise the b-tree that make up clustered and non-clustered indexes (i.e. in index pages above the leaf level of a clustered or non-clustered index)
- I'll explain more about the differences between these in a later post as it can be quite complicated (especially the differences between SQL Server 2000 and 2005) and is worth doing in separate posts.
- Index records typically do not contain all the column values in a table (although some do – called covering indexes).
- In SQL Server 2005, non-clustered index records can include LOB values as included columns (with the storage details exactly the same as for data records) and also can have row-overflow data that is pushed off-row (again, in exactly the same way as for data records).
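As a small illustration of the covering/included-column point above, here's a sketch using the example table created later in this post – the included columns are carried at the leaf level of the non-clustered index:
CREATE NONCLUSTERED INDEX idx_example_destination
ON example (destination)
INCLUDE (activity, duration);
GO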
Text records
- Text records are stored on text pages.
- There are various types of text records that comprise the tree structure that stores LOB values, stored on two types of text page. I'll explain how they work and are linked together in a future post.
- They are also used to store variable-length column values that have been pushed out of data or index records as part of the row-overflow capability.
Ghost records
- These are records that have been logically deleted but not physically deleted from a page. The reasons for this are complicated, but basically having ghost records simplifies key-range locking and transaction rollback.
- The record is marked with a bit that indicates it's a ghost record and cannot be physically deleted until the transaction that caused it to be ghosted commits. Once that happens, it is deleted by an asynchronous background process (called the ghost-cleanup task) or it is converted back to a real record by an insert of a record with the exact same set of keys.
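If you want to see whether ghost records are currently present, a hedged check is the ghost_record_count column of sys.dm_db_index_physical_stats in SQL Server 2005 (the table name is a placeholder; ghost records are transient, so the counts may already be zero by the time you look):
SELECT index_id, index_type_desc, ghost_record_count, version_ghost_record_count
FROM sys.dm_db_index_physical_stats (DB_ID (), OBJECT_ID (N'mytable'), NULL, NULL, 'DETAILED');
GO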
Other record types
- There are also records that are used to store various allocation bitmaps, intermediate results of sort operations, and file and database metadata (e.g. in the per-file fileheader page and database boot page). Again, I'll go into these in later posts (there's a big queue of posts building up :-))
Record structure
All records have the same structure, regardless of their type and use, but the number and type of columns will be different. For instance, a data record from a table with a complex schema may have hundreds of columns of various types whereas an allocation bitmap record will have a single column, filling up the whole page.
The record structure is as follows:
- record header
- 4 bytes long
- two bytes of record metadata (record type)
- two bytes pointing forward in the record to the NULL bitmap
- fixed length portion of the record, containing the columns storing data types that have fixed lengths (e.g. bigint, char(10), datetime)
- NULL bitmap
- two bytes for count of columns in the record
- variable number of bytes to store one bit per column in the record, regardless of whether the column is nullable or not (this is different and simpler than SQL Server 2000 which had one bit per nullable column only)
- this allows an optimization when reading columns that are NULL
- variable-length column offset array
- two bytes for the count of variable-length columns
- two bytes per variable length column, giving the offset to the end of the column value
- versioning tag
- this is in SQL Server 2005 only and is a 14-byte structure that contains a timestamp plus a pointer into the version store in tempdb
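Putting the structure above together, you can sanity-check a record's size by hand. Here's a back-of-the-envelope sketch for the two-varchar, one-int row that's inserted in the example below (ignoring the versioning tag, since row versioning isn't involved):
-- header (4) + fixed int (4) + column count (2) + NULL bitmap (1)
-- + variable column count (2) + offset array (2 x 2) + 'Banff' (5) + 'sightseeing' (11)
SELECT 4 + 4 + 2 + 1 + 2 + (2 * 2) + LEN ('Banff') + LEN ('sightseeing') AS expected_record_bytes;
GO
This comes to 33 bytes, which matches the 'Length 33' reported by DBCC PAGE further down.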
NULL bitmap optimization
So why is the NULL bitmap an optimization?
Firstly, having a NULL bitmap removes the need for storing special 'NULL' values for fixed-length datatypes. Without the NULL bitmap, how can you tell whether a column is NULL? For fixed-length columns you'd need to define a special 'NULL' value, which limits the effective range of the datatype being stored. For varchar columns, the value could be a zero-length empty string, so just checking the length doesn't work – you'd need the special value again. For all other variable-length data types you can just check the length. So, we need the NULL bitmap.
Secondly, it saves CPU cycles. If there were no NULL bitmap, extra instructions would have to be executed for both fixed-length and variable-length columns.
For fixed-length:
- read in the stored column value (possibly taking a cpu data cache miss)
- load the pre-defined NULL value for that datatype (possibly taking a cpu data cache miss, but only for the first read in the case of a multiple row select)
- do a comparison between the two values
For variable-length:
- calculate the offset of the variable length array
- read the number of variable length columns (possibly taking a cpu data cache miss)
- calculate the position in the variable length offset array to read
- read the column offset from it (possibly taking a cpu data cache miss)
- read the next one too (possibly taking another cpu data cache miss, if the previous offset was on the boundary of a cache line)
- compare them to see if they're the same
But with a NULL bitmap, all you have to do is:
- read the NULL bitmap offset (possibly taking a cpu data cache miss)
- calculate the additional offset of the NULL bit you want to read
- read it (possibly taking a cpu data cache miss)
So, it's about even for a lookup of a single fixed-length column, but for variable-length columns, and for multiple row selects, there's a clear advantage to having the NULL bitmap.
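The bitmap check really is just a bit test. As a rough T-SQL sketch (the bitmap byte 0xF8 comes from the example record later in this post, and columns are numbered from 0 here):
DECLARE @null_bitmap TINYINT, @column INT;
SELECT @null_bitmap = 0xF8, @column = 1;  -- second column in the record
SELECT CASE WHEN @null_bitmap & POWER (2, @column) > 0
            THEN 'NULL' ELSE 'NOT NULL' END AS column_null_state;
GO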
Using DBCC IND and DBCC PAGE to examine a row in detail
Let's create an example table to look at:
USE MASTER;
GO
IF DATABASEPROPERTY (N'recordanatomy', 'Version') > 0 DROP DATABASE recordanatomy;
GO
CREATE DATABASE recordanatomy;
GO
USE recordanatomy;
GO
CREATE TABLE example (destination VARCHAR(100), activity VARCHAR(100), duration INT);
GO
INSERT INTO example VALUES ('Banff', 'sightseeing', 5);
INSERT INTO example VALUES ('Chicago', 'sailing', 4);
GO
And we can use DBCC IND again to find the page to look at:
DBCC IND ('recordanatomy', 'example', 1);
GO
The output tells us the data page is (1:143) so we can dump it with DBCC PAGE, using option 3 to get a fully interpreted dump of each record.
DBCC TRACEON (3604);
GO
DBCC PAGE ('recordanatomy', 1, 143, 3);
GO
Remember we need the trace-flag to make the DBCC PAGE output go to the console instead of the error log. The output will contain something like the following:
Slot 0 Offset 0x60 Length 33
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Memory Dump @0x5C76C060
00000000: 30000800 05000000 0300f802 00160021 †0..............!
00000010: 0042616e 66667369 67687473 6565696e †.Banffsightseein
00000020: 67†††††††††††††††††††††††††††††††††††g
Slot 0 Column 0 Offset 0x11 Length 5
destination = Banff
Slot 0 Column 1 Offset 0x16 Length 11
activity = sightseeing
Slot 0 Column 2 Offset 0x4 Length 4
duration = 5
Let's use the record structure I listed above to go through this record and see how things are stored.
- Byte 0 is the TagA byte of the record metadata.
- It's 0x30, which corresponds to 0x10 (bit 4) and 0x20 (bit 5). Bit 4 means the record has a NULL bitmap and bit 5 means the record has variable length columns. If 0x40 (bit 6) was also set, that would indicate that the record has a versioning tag. If 0x80 (bit 7) was also set, that would indicate that byte 1 has a value in it.
- Bits 1-3 of byte 0 give the record type. The possible values are:
- 0 = primary record. A data record in a heap that hasn't been forwarded or a data record at the leaf level of a clustered index.
- 1 = forwarded record
- 2 = forwarding record
- 3 = index record
- 4 = blob fragment
- 5 = ghost index record
- 6 = ghost data record
- 7 = ghost version record. A special 15-byte record containing a single byte record header plus a 14-byte versioning tag that is used in some circumstances (like ghosting a versioned blob record)
- In our example, none of these bits are set which means the record is a primary record. If the record was an index record, byte 0 would have the value 0x36. Remember that the record type starts on bit 1, not bit 0, and so the record type value from the enumeration above needs to be shifted left a bit (multiplied by two) to get its value in the byte.
- Byte 1 is the TagB byte of the record metadata. It can either be 0x00 or 0x01. If it is 0x01, that means the record type is ghost forwarded record. In this case it's 0x00, which is what we expect given the TagA byte value.
- Bytes 2 and 3 are the offset of the NULL bitmap in the record. This is 0x0008 (DBCC PAGE presents multi-byte values in hex dumps as least-significant byte first). This means that there's a 4-byte fixed length portion of the record starting at byte 4. We expect this because we know the table schema.
- Bytes 4 to 7 are the fixed length portion. Again, because we know the table schema, we know to interpret these bytes as a 4-byte integer. Without that knowledge, you'd have to guess. The value therefore is 0x00000005, which is what we'd expect to see as the value of the duration column.
- Bytes 8 and 9 are the count of columns in the record. This is 0x0003 which is correct. Given that there are only 3 columns, the NULL bitmap of one bit per column will fit in a single byte.
- Byte 10 is the NULL bitmap. The value is 0xF8. We need to convert it to binary to make sense of the value. 0xF8 = 11111000. This makes sense – bits 0-2 represent columns 1-3 and they're all 0, meaning the columns aren't NULL. Bits 3-7 represent non-existent columns and they're set to 1 for clarity.
- Bytes 11 and 12 are the count of variable length columns in the record. That value is 0x0002, which we again know to be correct. This means there will be two two-byte entries in the variable length column offset array. These will be bytes 13-14 and 15-16, having values of 0x0016 and 0x0021 respectively. Remember that the offset array entries point to the end of each column value – this is done so that we know how long each column is without having to store its length as well.
- So, the final offset is bytes 15 and 16, which means the offset of the start of the first variable length column must be byte 17 (or 0x11 in hex), which agrees with the DBCC PAGE dump. The offset of the end of the first variable length column is 0x0016, so the first value is from byte 17 to byte 21 inclusive. This value is 0x42616E6666. We know from the table metadata that this is the first varchar column, destination. Converting to ASCII gives us the column value 'Banff'. Using similar logic, the second value is from byte 22 to byte 32 inclusive and has the value 'sightseeing'. Both of these match the data we're expecting.
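To double-check the walkthrough, here's a hedged T-SQL sketch that decodes the same 33 bytes by hand. The hex constant is the record copied from the DBCC PAGE dump above; SUBSTRING uses 1-based positions, and multi-byte fields are stored least-significant byte first, so the bytes are swapped before converting:
DECLARE @rec VARBINARY (64);
SET @rec = 0x30000800050000000300F802001600210042616E66667369676874736565696E67;
SELECT
    CONVERT (TINYINT, SUBSTRING (@rec, 1, 1))                              AS TagA,               -- 0x30
    CONVERT (SMALLINT, SUBSTRING (@rec, 4, 1) + SUBSTRING (@rec, 3, 1))    AS null_bitmap_offset, -- 8
    CONVERT (INT, SUBSTRING (@rec, 8, 1) + SUBSTRING (@rec, 7, 1)
                + SUBSTRING (@rec, 6, 1) + SUBSTRING (@rec, 5, 1))         AS duration,           -- 5
    CONVERT (SMALLINT, SUBSTRING (@rec, 10, 1) + SUBSTRING (@rec, 9, 1))   AS column_count,       -- 3
    CONVERT (TINYINT, SUBSTRING (@rec, 11, 1))                             AS null_bitmap,        -- 0xF8
    CONVERT (SMALLINT, SUBSTRING (@rec, 13, 1) + SUBSTRING (@rec, 12, 1))  AS var_column_count,   -- 2
    CONVERT (SMALLINT, SUBSTRING (@rec, 15, 1) + SUBSTRING (@rec, 14, 1))  AS var_end_1,          -- 22 (0x16)
    CONVERT (SMALLINT, SUBSTRING (@rec, 17, 1) + SUBSTRING (@rec, 16, 1))  AS var_end_2,          -- 33 (0x21)
    CONVERT (VARCHAR (100), SUBSTRING (@rec, 18, 22 - 17))                 AS destination,        -- 'Banff'
    CONVERT (VARCHAR (100), SUBSTRING (@rec, 23, 33 - 22))                 AS activity;           -- 'sightseeing'
GO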
And that's it.
Some of the features of SQL Server 2008 will introduce changes to the record structure – more on those when the features are available in CTPs.