Implementing and Managing Large Databases - Essay Example

Summary
The paper "Implementing and Managing Large Databases" establishes that the size of a company's database influences its selection of a DBMS and its database design. A company that operates a large database must weigh several considerations when deciding on a DBMS.
IMPLEMENTING AND MANAGING LARGE DATABASES

(Institutional Affiliation)

Key words: database, bottlenecks, cloud considerations

Contents

INTRODUCTION
  Database
  Large database
DISCUSSION
  Dealing with a large database
  Scale-Out vs. Scale-up option
  DBMS Architecture Considerations
  Sharding
  NoSQL
  Memory Balancing Act
  Database Virtualization
  Recommendations for DBMS architecture
  Considerations made by Database Administrators (DBAs)
  Cloud Considerations for Large Databases
  Defining Performance Bottlenecks
  Availability of Large Databases
  Large Database Backup
CONCLUSION
REFERENCES

INTRODUCTION

Database

A database is a large collection of data organized in a systematic way. Databases allow computer programs to select pieces of data in an organized manner; they represent the way end-user data is stored, accessed, and managed. Databases are managed through database management systems (DBMS), and a DBMS eliminates the data inconsistency inherent in file systems (Custers, 2013). Databases come in different sizes, and the size of a database is difficult to quantify. Different database management systems handle different sizes of databases: what is large and difficult to handle for one DBMS might be handled with ease by another. Database technologies are evolving to address the problem of handling large databases; the technologies are dynamic, but the fundamental principles and skills remain the same. Many vendors are addressing the need for databases that support huge amounts of data, usually in the petabyte range (> 1,000 terabytes) (Kavanagh, 2004).

Large database

Information technology is dynamic: more data is collected as hardware and software advance to handle bulky data, which makes it difficult to define what a large database entails. What is large today will be tiny in ten years. A large database can be defined as one that:
  - is supported across multiple servers,
  - requires more than two database administrators (DBAs),
  - holds more than 40 GB of data,
  - is massively shared, and
  - has performance problems attributable to the amount of data.

Implementing and managing large databases has been a problem for most companies. Companies ought to examine and evaluate their database design to identify the inherent inhibitors to a seamless database management system.

DISCUSSION

The size of a database is influenced by data volume, hardware, throughput, and software (Dittrich, 2001). Data volume is represented by the number of tables and/or the size of the data. A small database running on a constrained server will display the characteristics of a large database. Throughput is a measure of usage levels: if a small database serves nine million users simultaneously, it will be termed a large database. Software covers the database management system employed, as well as its implementation. A database is only as good as the weakest of these four factors, and the weaknesses can be compensated for in various ways: if the hardware has a small disk, a compression technique can be used; if the RAM is insufficient, a RAM-efficient DBMS can be employed.

Dealing with a large database

Scale-Out vs. Scale-up option

In deciding how to scale a large database, there are scale-up and scale-out options. Scaling up is not a preferable option with modern database management systems: large servers tend to have an adverse price-to-performance ratio compared to commodity machines, and the performance gained for every dollar spent on a high-end server is usually low. The better alternative is to scale out. Adding commodity machines, rather than buying a faster disk or more RAM for one server, can save money, although scaling out has its own associated costs: modifying the application to scale out, and additional software licenses. Scaling out is a scalable and cost-effective solution for companies using open-source software, and a collection of commodity servers may be more effective than one 'specialty server.' If the large database is run in the cloud, the best option is to scale out (Rob & Coronel, 2002).
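To make the scale-out idea concrete, the following Python sketch (an illustration added here, not part of the essay's sources; the host names are hypothetical) spreads queries across a pool of commodity servers in round-robin fashion instead of sending everything to one large machine.

    import itertools

    # Hypothetical pool of commodity database servers; a scale-up design
    # would replace this list with a single large, expensive machine.
    READ_POOL = ["db-commodity-01", "db-commodity-02", "db-commodity-03"]

    class RoundRobinRouter:
        """Hands out servers from the pool in strict rotation."""

        def __init__(self, hosts):
            self._cycle = itertools.cycle(hosts)

        def next_host(self):
            return next(self._cycle)

    router = RoundRobinRouter(READ_POOL)
    for query in ("SELECT ...", "SELECT ...", "SELECT ..."):
        print(query, "->", router.next_host())

A real deployment would route through a load balancer or driver-level support rather than hand-rolled code, but the principle is the same: many cheap machines sharing the load.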
DBMS Architecture Considerations

When implementing a large database, its design, in conjunction with the architecture of the DBMS, determines its scaling profile. The general saying in the DBMS world is: "The less the database does, the faster it can do it" (Penninga, 2008). The architectural considerations below follow from this saying.

Sharding

The principle underlying this technique is to break a slow, large database into quick, small databases (shards). Each shard has dedicated storage and compute, and the shards are independent: no shard has knowledge of any other, and there is no coordination between them. Any process that involves two or more shards must be carried out in the application, because the database itself cannot coordinate work that spans shards; for example, a range scan would load data into the application, which would then operate across that data. The application is responsible for directing every database request to the appropriate shard. Data is dynamic, growing continuously, and as it grows it tends to move among shards (re-sharding). Re-sharding can leave the data skewed, a problem that can be addressed by employing tools such as Scalebase, DBShards, and ScaleArc (Stephen, 2006).
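The application-side routing that sharding implies can be sketched in a few lines of Python (an added illustration with invented shard names, not taken from the essay's sources): a record key is hashed to select a shard deterministically, and any operation that spans shards must be assembled in the application from per-shard results.

    import hashlib

    # Hypothetical shards: small, independent databases with their own
    # storage and compute, and no knowledge of one another.
    SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

    def shard_for(key: str) -> str:
        """Deterministically map a record key to a single shard."""
        digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    # The application, not the database, directs each request.
    print(shard_for("customer:1042"))

    # A range scan touches every shard, and the application must merge
    # the partial results itself.
    partial_results = [f"rows scanned on {shard}" for shard in SHARDS]
    print(partial_results)

This makes the trade-off described above visible: cross-shard work such as counts, joins, and range scans becomes application code.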
NoSQL

Structured Query Language (SQL) provides a platform for controlling data. NoSQL describes a class of DBMS known as 'key-value stores': the application presents a key to the NoSQL database, which returns the value associated with that key. Much of the processing is directed to the application, and the database becomes scalable precisely because the processing task has shifted to the application. The efficiency of this architecture therefore depends on the application's performance; if the application scales, this technique is preferable to the other options, as it provides suitable horizontal scalability.

Memory Balancing Act

Memory can run a hundred times faster than disk access, so companies should increase their memory usage; solid-state disks (SSDs) are also a suitable option for boosting performance. A large database may run exclusively in memory, together with its data and indexes; it may reside on disk while maximizing caching in memory, with its indexes required to stay in memory; or it may maximize memory for indexes and data and allow both to overflow onto disk. Implementing large databases can be made practical by employing this memory balancing act (Dittrich, 2001).

Database Virtualization

A large database may also be scaled through database virtualization. In this architecture the application is presented with a logical database, a compute tier handles the actual database calls, and the compute tier relies on a storage tier. The technique enables a large database to operate as a single logical database when, in reality, it is running across an elastic cluster of virtual instances or database servers. Unlike sharding and NoSQL, database virtualization preserves traditional database functions, such as range scans, counts, and joins, inside the database. It enhances consistency in the manipulation of data, and it reduces latency and network traffic by moving database work out to the storage nodes, which allows parallel processing; ScaleDB makes this possible.

Recommendations for DBMS architecture

DBAs should consider how a database utilizes the available memory on its target platform. A large database running in a public cloud, with limits on the available RAM, would overwhelm an in-memory database that requires all of its indexes to reside in memory. When settling on a DBMS for a large database, DBAs should therefore evaluate the planned growth against the inherent RAM limitations. If the database virtualization architecture is chosen, the DBA should consider its local caching capabilities: some implementations do not cache data on the compute tier, yet the architecture should enable such caching, since requests served from a local cache are faster than those served from disk (Kavanagh, 2004).

Considerations made by Database Administrators (DBAs)

Cloud Considerations for Large Databases

A database administrator should consider the finite limits of large databases operating in a public cloud: network bandwidth, RAM, storage, and CPU. When it is impossible to add more RAM to a database server, the constraint forces the DBA to scale out, which is a critical consideration when planning for growth and selecting a DBMS. To cope with the constraints inherent in the cloud, companies should be ready to scale out from the beginning (Hellerstein & Stonebraker, 2005).

Defining Performance Bottlenecks

Resolving the hitches associated with large databases involves establishing the factors that create performance bottlenecks, and database-related processes involve trade-offs. If a database has a high read/write ratio, indexes can enhance read performance; if the ratio is low, performance can suffer from the continuous index updates. DBAs managing a large database must therefore decide whether or not to add indexes (Connolly & Begg, 2002).
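The read/write trade-off can be demonstrated with Python's standard-library sqlite3 module (a self-contained sketch added for illustration; the table and column names are invented).

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(i, f"cust-{i % 1000}", i * 1.5) for i in range(100_000)],
    )

    # High read/write ratio: the index lets lookups by customer avoid
    # a full table scan, so reads get faster.
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
    print(conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer = ?", ("cust-42",)
    ).fetchone())

    # Low read/write ratio: every subsequent insert now maintains the
    # index as well as the table, which is the continuous update cost.
    conn.execute("INSERT INTO orders VALUES (?, ?, ?)", (100_000, "cust-42", 7.5))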
Availability of Large Databases

Every company wishes to have a high-performance database that is also highly available, but the availability of large databases is inhibited by financial and performance constraints. The financial constraint is the premium charged for a highly available DBMS; the performance constraint is the cost of writing to two or more systems and waiting for the slowest write to respond. ScaleDB provides an optional configuration that writes to memory on multiple storage nodes and flushes to disk outside the transaction. This increases performance because the slowest write is handled outside the transaction and does not hold up the database (Ward, 2002).

Large Database Backup

Backup complexity is compounded for large databases operating on multiple servers. Backup entails a balance between the data backup budget and the tolerance for data loss, and there are two considerations for large database backup. If a company can tolerate data loss for more than twenty-four hours, a tape backup may be cost-effective and sufficient; if not, the company should opt for a highly available DBMS. A 'one-size-fits-all' backup plan should not be employed across the whole database: an effective plan considers whether some tables can be backed up less frequently than others, and whether it is possible to back up only the changes in the data to speed up the process. DBAs should also consider the recovery window: the amount of time required to recover and resume operations after a failure. A recovery window covers establishing the failure, planning the recovery, and implementing the recovery plan, and the recovery time is influenced by the amount of data in the database and its complexity. During a system failure, a company equates the recovery window to lost revenue; a company with 'zero tolerance' for data loss should opt for a highly available DBMS (Hellerstein & Stonebraker, 2005).
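The idea of backing up only the changes can be sketched as follows (Python with sqlite3; an added illustration in which the updated_at column and file layout are invented): each incremental pass copies out only the rows modified since the previous run.

    import json
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL, updated_at REAL)")
    conn.execute("INSERT INTO accounts VALUES (1, 100.0, ?)", (time.time(),))

    def incremental_backup(conn, since, path):
        """Dump rows changed since the last run; return the new cutoff."""
        cutoff = time.time()
        rows = conn.execute(
            "SELECT id, balance, updated_at FROM accounts WHERE updated_at > ?",
            (since,),
        ).fetchall()
        with open(path, "w") as fh:
            json.dump(rows, fh)
        return cutoff

    last_run = incremental_backup(conn, 0.0, "accounts.inc.1.json")
    # Nothing has changed since, so the second pass copies no rows.
    incremental_backup(conn, last_run, "accounts.inc.2.json")

Recovery then means replaying the full backup plus every increment, which is exactly the balance between backup cost and loss tolerance described above.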
CONCLUSION

The size of a company's database influences its selection of a DBMS and its database design. If the company operates a large database, or predicts that its database will grow large in the future, several considerations should shape the choice of DBMS, including the hardware used, the volume of data, and the throughput; failure to think these through results in a slow database. There are also business considerations DBAs need to weigh when selecting a DBMS and database design: the tolerance for data loss and the recovery window. A company that wishes to run its large database in the cloud should opt for scale-out options once it exhausts a single database server. Finally, the company's DBA should be highly qualified for the job: a professional with the ability to fine-tune the database for maximum performance. If all these aspects are met, implementing and managing a large database becomes straightforward (Penninga, 2008).

REFERENCES

Connolly, T. M., & Begg, C. E. (2002). Database systems: A practical approach to design, implementation, and management (3rd ed.). Harlow, England: Addison-Wesley.

Custers, B. H. (2013). Discrimination and privacy in the information society: Data mining and profiling in large databases. Berlin: Springer.

Dittrich, K. R. (2001). Component database systems. San Francisco: Morgan Kaufmann Publishers.

Hellerstein, J. M., & Stonebraker, M. (2005). Readings in database systems (4th ed.). Cambridge, MA: MIT Press.

Kavanagh, P. (2004). Open source software implementation and management. Amsterdam: Elsevier Digital Press.

Penninga, F. (2008). 3D topography: A simplicial complex-based solution in a spatial DBMS. Delft: NCG, Nederlandse Commissie voor Geodesie.

Rob, P., & Coronel, C. (2002). Database systems: Design, implementation, and management (5th ed.). Boston, MA: Course Technology.

Stephen, M. (2006). Databases. Oxford: Butterworth-Heinemann.

Ward, S. (2002). Databases. Oxford: Heinemann Educational.
