The quantity and complexity of data in business activities are growing exponentially. Advances in technology, low-cost hardware and storage devices, and the extensive use of the Internet and online applications have pushed the projected size of collected data into the zettabyte range (10²¹ bytes). The term Big Data applies to data sets whose size exceeds the capability of existing tools to capture, access, analyze, and interpret the data in a reasonable amount of time. Nothing comes without a price: the more data an organization collects, the more prone it is to making errors or misusing that data. In a recent survey, more than 500 organizations admitted that data quality is a problem for them. Furthermore, data quality is measured along many dimensions, making it more complex to determine whether the collected data is usable. Finally, information quality itself has more than one dimension, making it more difficult to determine whether the data has produced “good” information. We take a theoretical approach, focusing on the developmental characteristics of Big Data and information quality. Our aim is to create a theoretical framework linking Big Data and information quality that answers two questions: Are business organizations able to make use of Big Data? And are they able to interpret the data so that it can support real-time decision making? We draw on a literature review to build a data-quality framework that establishes the link between Big Data and information quality.