How does a database handle huge amounts of data and still retrieve any of it within a second?

The good news is that the data isn’t stored in one giant table. A database is actually made up of many smaller tables.

Database engines then link those tables together and read the information in them fairly efficiently. Oracle, MySQL, and the like handle massive amounts of data pretty easily. Because these engines don’t actually care what the data is, only the reference points (keys) that connect it, they can retrieve the data quickly.
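
To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The users/orders tables and their columns are hypothetical, purely for illustration; the point is that the data lives in small linked tables, and the engine follows a key from one to the other.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# The data is split across small tables that reference each other by key,
# rather than being stored as one giant table.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "user_id INTEGER REFERENCES users(id), item TEXT)"
)

cur.execute("INSERT INTO users VALUES (1, 'John')")
cur.execute("INSERT INTO orders VALUES (100, 1, 'keyboard')")

# The engine never inspects what 'John' means; it just follows the
# user_id reference point from one table to the other.
for row in cur.execute(
    "SELECT users.name, orders.item "
    "FROM users JOIN orders ON orders.user_id = users.id "
    "WHERE users.id = ?",
    (1,),
):
    print(row)  # ('John', 'keyboard')
```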
 
Think of how RAM works: fast pointer switching. That is how you get one user’s data out of billions of rows in a table. The software doesn’t need to know that the person’s first name is John; it just needs to know that the user’s pointer is here, that it points to that table, and that that table points to the next connected bit of data, and so on. It is a chain of tables. Again, most of the retrieval isn’t fetching all of the data, it is just getting the pointer. That pointer leads to the next pointer, and so forth: fast swapping of pointers, and then one display of the data at the end.
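
As a toy illustration of that pointer-chasing idea (a simplified model, not how a real engine is implemented), here is a sketch in plain Python where each “table” is a dict keyed by id, and retrieval is just a few key lookups. The names and data are made up.

```python
# Each "table" is a dict keyed by id; a reference to another table is
# just an id stored in the row -- the "pointer" in the chain.
users = {1: {"name": "John", "address_id": 7}}
addresses = {7: {"city": "Boston", "country_id": 3}}
countries = {3: {"name": "USA"}}

def fetch_user(user_id: int) -> dict:
    # Each step is a constant-time key lookup -- following the pointer --
    # not a scan of the billions of other rows that might exist.
    user = users[user_id]
    address = addresses[user["address_id"]]
    country = countries[address["country_id"]]
    # Only at the very end is the actual data assembled for display.
    return {
        "name": user["name"],
        "city": address["city"],
        "country": country["name"],
    }

print(fetch_user(1))  # {'name': 'John', 'city': 'Boston', 'country': 'USA'}
```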
 
“Big Data,” as it is termed, is a huge industry built specifically around this challenge. It isn’t perfect, but an entire industry exists simply to find better ways to store data so that the pointer chain can be processed faster and faster.
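
One everyday example of storing data so the chain can be walked faster is an index. Here is a hedged sketch, again with sqlite3 and a hypothetical users table, showing the query plan change from a full scan to an index search once an index exists (exact plan wording varies by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without an index on email, this query would scan every row.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchone()
print(plan)  # ... SCAN users

# With an index, the engine walks a B-tree straight to the matching row.
cur.execute("CREATE INDEX idx_users_email ON users(email)")
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchone()
print(plan)  # ... SEARCH users USING INDEX idx_users_email (email=?)
```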
