(NOTE: For a recent update, please see the "Update for CTP 2.2" section.)

For some time now, many of us have struggled with data that is mostly standard US-English / ASCII characters but also needs to allow for the occasional non-ASCII character. VARCHAR is typically one byte per character but can't represent a wide range of accented characters all at the same time (more than would be found on a single 8-bit Code Page). NVARCHAR, being a Unicode datatype, can represent all characters, but at a cost: each character is typically 2 bytes.

When we have data that is most often standard ASCII, but has the potential (whether it ever happens or not) to contain non-ASCII characters (names, URLs, etc.), we have no choice but to use NVARCHAR. However, then we are wasting space for all of the data that could fit into VARCHAR and take up half as much room. While some claim that "disk is cheap", wasted space negatively impacts query performance, backup size, backup and restore times, etc.

One approach is to have two columns for the data, one for each datatype. The idea is to store values that are 100% ASCII in the VARCHAR column, and anything else in the NVARCHAR column. Then you can have a computed column return the non-NULL column for each row. While this approach does work, it is not something you are going to implement across 50 or more columns. So, thankfully, Microsoft has provided a solution to this problem.
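The two-column workaround described above can be sketched as follows. This is only an illustration, and the table and column names (`dbo.Customers`, `CustomerNameA`, `CustomerNameU`, `CustomerName`) are hypothetical, not from any particular schema:

```sql
-- Illustrative sketch of the two-column approach (names are made up).
CREATE TABLE dbo.Customers
(
    CustomerID    INT IDENTITY(1, 1) NOT NULL,
    -- Populate exactly one of these two columns per row:
    CustomerNameA VARCHAR(100)  NULL, -- values that are 100% ASCII (1 byte/char)
    CustomerNameU NVARCHAR(100) NULL, -- values containing any non-ASCII character
    -- Computed column returns whichever of the two is non-NULL:
    CustomerName AS ISNULL(CustomerNameU, CONVERT(NVARCHAR(100), CustomerNameA))
);
```

Queries would read the computed `CustomerName` column, while inserts and updates would need logic (application-side or in a trigger / stored procedure) to decide which physical column to populate, which is exactly why this does not scale to 50+ columns.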