A good, future-proof data model is one of the most challenging problems when building applications. That is specifically true when working on applications meant to store (and analyze) massive amounts of data, such as time series, log data, or event-storing ones.

Deciding what data types are best suited to store that kind of information comes down to a few factors, such as requirements on the precision of floating-point values, the actual values' content (such as text), compressibility, or query speed.

In this installment of the best practices series (see our posts on narrow, medium, and wide table layouts, single or partitioned hypertables, and metadata tables), we'll have a look at the different options in PostgreSQL and TimescaleDB regarding most of these questions. While we can't answer the requirements question for you, we can offer a few alternatives (such as integers instead of floating point), but more on that later.

Before We Start: Compression

Event-like data, such as time series, logs, and similar use cases, are notorious for ever-growing amounts of collected information. Hence, the data will grow continuously and require more and more disk storage.

But that's not the only issue with big data. Querying, aggregating, and analyzing are some of the others.

Reading this amount of data from disk requires a lot of I/O operations (IOPS, input/output operations per second), which is one of the most limiting factors in cloud environments and even on-premise systems (due to how storage works in general). While Non-Volatile Memory Express (NVMe) transfer protocols and similar technologies can help you optimize for high IOPS, they're not limitless.

TimescaleDB's compression algorithms (and, to some extent, PostgreSQL's default ones) help decrease disk space requirements and IOPS, improving cost, manageability, and query speed (a minimal setup sketch follows at the end of this post).

But let's get to the actual topic: best practices for data types in TimescaleDB.

PostgreSQL (and the SQL standard in general) offers a great set of basic data types, providing a perfect choice for all general use cases. However, some of them you should be discouraged from using. The PostgreSQL Wiki provides a great list of best practices regarding data types to use or avoid. Anyhow, you don't immediately have to jump over there; we'll cover most of them here.

When looking back at the best practices on table layout, medium and wide table layouts tend to have a few too many nullable columns, serving as placeholders for potential values. Due to how PostgreSQL stores nullable values that are NULL, those are almost free.
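To see why those NULLs are almost free, you can compare row sizes directly: PostgreSQL tracks NULLs in a per-row null bitmap instead of storing a value. Here's a minimal sketch, assuming a hypothetical `readings` table (the table and column names are purely illustrative, and `pg_column_size` measures the row value rather than the full on-disk tuple):

```sql
-- Hypothetical table with nullable "placeholder" columns (illustrative names).
CREATE TABLE readings (
    time        timestamptz NOT NULL,
    temperature double precision,
    humidity    double precision,
    note        text
);

-- One fully populated row, one row where the optional columns stay NULL.
INSERT INTO readings VALUES
    (now(), 21.5, 40.2, 'calibrated'),
    (now(), 21.5, NULL, NULL);

-- NULLs only occupy a bit in the row's null bitmap, so the second row is smaller.
SELECT pg_column_size(readings.*) AS row_bytes, temperature, humidity, note
FROM readings;
```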
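And, as promised, here's a rough sketch of what enabling TimescaleDB's native compression can look like. It follows the standard TimescaleDB 2.x API; the `conditions` hypertable, its columns, and the seven-day threshold are illustrative assumptions, not recommendations from this series:

```sql
-- Hypothetical hypertable for device readings (illustrative schema).
CREATE TABLE conditions (
    time        timestamptz NOT NULL,
    device_id   integer     NOT NULL,
    temperature double precision
);
SELECT create_hypertable('conditions', 'time');

-- Enable native compression: segmenting by device keeps per-device queries cheap,
-- and ordering by time gives the compression algorithms well-ordered batches.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);

-- Automatically compress chunks once they are older than seven days.
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```

Compressed chunks remain queryable with regular SQL; the segment-by and order-by choices are what give the columnar compression algorithms long runs of similar values to work with.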