I spoke about our experience at my workplace moving from a 2+ TB relational database to a twelve-shard MongoDB cluster. My hope was to convey some of the challenges we encountered and the lessons we learned while working on this project.
Have you ever had to work with a legacy relational database and found that there was little, if any, documentation about the existing tables and columns? In my career, this has been a common problem, especially when coming up to speed on an existing application’s database. It’s bad enough when there is no ER diagram to give you an overview of the database, but many database developers do everything in their power, it seems, to make the database itself opaque.
What are some best practices for making your database easier to understand?
Make Your Table and Column Names Long Enough to be Descriptive
Instead of something like CSTACCT, use a name like CUSTOMER_ACCOUNT. From a self-documenting perspective, there is very little point in using short, cryptic names when you may have 30 characters (Oracle) or as many as 63 characters (PostgreSQL) to work with.
Establish Abbreviation Standards
When you must abbreviate names, use consistent abbreviations, and maintain a list of approved abbreviations that all database designers and developers may use. Enforce the standard abbreviations through code reviews.
Be Consistent with Table and Column Naming
Another practice that makes for difficult-to-understand databases is the use of inconsistent naming conventions. Will your “flag” columns be named “completed_yn”, “completed_flag”, or “completed_ind”? It probably doesn’t matter which convention you adopt, as long as you use it consistently throughout your database.
Use Table and Column Comments
Most RDBMSs today allow you to store comments right in the database itself. This makes it easy to generate a data dictionary. Adopt a coding standard for your data definition SQL that requires table and column comments (sometimes called “remarks”).
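As a quick sketch of what this looks like in practice (the CUSTOMER_ACCOUNT table and its columns are hypothetical; this COMMENT ON syntax works in both Oracle and PostgreSQL, though other vendors differ):

```sql
CREATE TABLE customer_account (
    customer_account_id  INTEGER PRIMARY KEY,
    account_opened_date  DATE NOT NULL,
    completed_yn         CHAR(1) NOT NULL  -- one consistent "flag" convention
);

-- Comments stored in the database itself, ready for a data dictionary tool
COMMENT ON TABLE customer_account IS
    'One row per account held by a customer.';
COMMENT ON COLUMN customer_account.completed_yn IS
    'Y if account setup is complete, N otherwise.';
```

Tools like SchemaSpy can then pick these comments up automatically when generating documentation.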
Create a Data Dictionary
Whenever you’re responsible for database design or maintenance, publish a data dictionary. There are expensive tools like ERwin that make this easy, but I like to use the free and open-source SchemaSpy. It’s critical that you keep the data dictionary up-to-date, so be sure to publish a revised copy whenever your database schema changes.
What Do You Gain?
By adopting the practices I’ve described, you’ll gain the powerful ability to help others understand your databases and the data itself. In summary:
- New developers can more easily understand where data is stored in your databases.
- Other teams, outside the software engineering department, can find the data they’re interested in, whether for ad hoc queries or for regularly scheduled reports.
- The ETL logic for migrating data to a new system is much easier to develop when all existing data is clearly described.
- Along the same lines, data integration becomes simpler because ETL developers can readily identify source and target database elements.
MongoDB Applied Design Patterns is a book that I will read again. I generally don’t say that about technical books, but the strengths of this work are such that many parts merit a second reading.
This book is for folks with some experience using MongoDB. If you’ve never worked with MongoDB before, you should start with another book. Python developers, in particular, will benefit from studying this book, as most of the code examples are in that language. As long as you have some object-oriented programming experience and have worked with the MongoDB shell, though, you’ll have little difficulty following the code examples.
Another group of people who will strongly benefit from this book are those with only relational database experience. The author does a thorough job, particularly in the early sections of the book, of comparing MongoDB with traditional relational database management systems.
I particularly liked the author’s discussion of transactions, in chapter 3. The example is complex, and not a simple debit-credit discussion. You understand through this example that you must write your own transaction management when you give up using a relational database system. To me, this is an important point, and I’m glad that the author spends so much time on this example.
Some of the use cases presented, particularly those in chapters four, five, and six, are similar to those in the MongoDB manual. The remaining use cases go beyond what is described there. All of the discussion in these use cases is thorough: there is typically an explanation of the data model (schema design) and then of standard CRUD operations. The author also goes into not-so-typical operations, like aggregation. I was particularly pleased that each use case includes sharding concerns.
In summary, I highly recommend this book. It’s great to see MongoDB being adopted for so many different uses.
I wanted to give my impressions of the course and to encourage everyone who is interested to take the free training that 10gen is providing. The next DBA course begins on April 29, and the MongoDB for Developers course (for Java developers) starts on May 13.
There are six weeks of lectures, and a seventh week for a final exam, which is really a hands-on project.
Each week is divided into six to twelve video “lectures” of varying length. Some of the lectures are only three or four minutes long, though some run fifteen to twenty minutes. Most of the video lectures are followed by one or two quizzes, almost all multiple-choice. You’re given up to three chances to get a quiz answer correct, and you can even peek at the answer before submitting your solution. While this may seem like a way to cheat, you’re really only cheating yourself if you make no attempt to answer the quiz questions honestly: if you don’t understand the material in the lectures, you will not be able to complete the homework. Think of the quizzes as a way of checking your understanding of the video lectures, and re-watch any lectures where you found the quiz difficult.
There are typically four or five homework problems per week. These are primarily not multiple-choice, but worked problems that require you to actually perform various operations with MongoDB. If you haven’t mastered the material in the lectures, you will not complete the homework successfully, as the problems are not trivial.
The Less Good
Like most MOOCs, this is no substitute for an instructor-led class. You can’t ask questions of the instructor in real time, only through the course message board.
The quizzes are somewhat simplistic. This may be somewhat attributable to the test engine that 10gen used. It would be helpful to students if the questions went beyond multiple choice and required a bit more understanding.
I found myself referring to the excellent on-line documentation for MongoDB to fill in gaps that were left by the lectures themselves. Many weeks I found myself wanting more information than the lectures provided.
The Good
It’s free! 10gen is very wise to offer this training free to the community. The more folks who know how to use MongoDB, the better it is for MongoDB and for 10gen.
The DBA course is taught by one of the founders of 10gen, Dwight Merriman. To have someone at that level spending precious time on instruction tells me that 10gen clearly values building up its user base and community.
The homework assignments really test your understanding of the material. 10gen was very ingenious in making it difficult to cheat on the homework questions. I think the homework is the best part of the course, actually. I’ve referred back to homework questions several times to help me solve a problem at work. The course would be even stronger if similar effort had been put into the quizzes.
I’m really grateful to 10gen for making this training freely available. I was so impressed by my experience with MongoDB for DBAs that I’ve registered for the MongoDB for Java Developers course (M101J), which begins on May 13. Can’t wait!
Happy New Year!
With a new year here, I’ve been thinking about ways to expand my skill set and the technologies to learn more about in 2012. As I’m still a data guy at heart, it should be no surprise that the technologies that interest me are related to data and databases.
I’m hoping to do a lot of work with MongoDB this year. In the past, I’ve played around with it some, but it looks as if there will be at least one project at work that will let me get some deep experience using MongoDB. I’m currently reading “MongoDB: The Definitive Guide” by Kristina Chodorow and Michael Dirolf and am really enjoying it. Based on what I’ve read so far, I would say this is one of those classic O’Reilly books: specialized, but so well-written that you almost forget that you’re reading something highly technical.
I attended a number of talks at QCon SF back in November and one topic that recurred frequently was how companies are using Hadoop. It seemed as if every presenter described how their company is finding a way to use at least part of the Hadoop ecosystem. And Hadoop is truly that: an entire ecosystem, encompassing not only the core project, but also Pig, ZooKeeper, Mahout, HBase, and still others. You can find more information at the Apache Hadoop project page. I’m hoping to get a proof-of-concept cluster up and running by the middle of the year.
Full-text search technology falls into an area that, like MongoDB, I have had some exposure to in the past, but would like very much to learn more about. I’m planning to spend some time this year on Lucene first, and then move onto the search server, Solr. I’m fortunate to have some colleagues with a lot of experience in this area, and I intend to mine their knowledge whenever possible.
I believe that those of us steeped in the relational database world can take a great step toward a data architect mindset by developing a deep understanding of XML and its related technologies. My plan this year is to get a firmer understanding of XML, XPath, XSLT, XML Schema, and so on. I’ve done quite a bit of work with XML in the past, but this has typically been in relation to Oracle’s handling of XML documents within the database. I want to gain a broader understanding.
Of course, JSON is coming on strong, supplanting XML in some of its previous strongholds. For example, MongoDB’s data model relies on JSON-formatted documents.
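To make that concrete, here is an illustrative MongoDB-style document (the field names are my own invention, not taken from any particular schema). Data that would span several joined relational tables lives together in one nested document:

```javascript
// A single customer document, MongoDB-style. The embedded addresses
// array would typically be a separate child table in a relational design.
const customer = {
  _id: 12345,
  name: "Acme Corp",
  accountOpened: "2011-06-01",
  addresses: [
    { type: "billing",  city: "Boston" },
    { type: "shipping", city: "Cambridge" }
  ]
};

// The whole record round-trips as JSON text
const asJson = JSON.stringify(customer);
console.log(JSON.parse(asJson).addresses.length); // 2
```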
This seems like enough to keep me busy outside of my day job for one year!
Monday I attended Mongo Boston 2011 at the Microsoft NERD Center in Cambridge.
The opening keynote by 10gen’s CTO and co-founder Eliot Horowitz struck a couple of very interesting notes.
- 10gen wants MongoDB to be a general-purpose database.
- One of their key principles in building MongoDB is to reduce the number of “knobs” an administrator needs to turn.
Overall I would say the conference was valuable, but could really do with a second day. For one thing, none of the presentations were more than forty-five minutes long. While that length does allow for decent overviews, it’s impossible to get into any real depth with such a limited time.
A second day could also reduce some of the “drinking from a fire hose” effect. I attended eight different presentations, which contained a lot of concepts to absorb.
I wouldn’t recommend these conferences for those who have no experience at all using MongoDB. I’ve worked with it for a little over a year now, so the material was at a good level for my current understanding.
The price was right at $20 or $30, depending on whether you met the early-bird deadline. To my mind, this pricing is a shrewd strategy by 10gen, as it enables interested students to attend. Building interest and enthusiasm among the up-and-coming developers of tomorrow is a great way to build a community. That said, it was gratifying to see that the attendees represented a wide range of ages.
If you get an opportunity to attend one of the upcoming conferences, I think you’ll find the day worth your time.
(Disclosure: O’Reilly Media provided me with a free ebook copy of this book for the purposes of this review. I have done my best not to let that influence my opinions here.)
When a book bills itself as “The Definitive Guide,” well, that’s a tall order to fill. But, except for updates as new releases of HBase roll out, I can’t imagine another book surpassing this one by Lars George.
Lars George has been working with HBase since 2007 and has been a full committer on the project since 2009. He now works for Cloudera (a company providing a commercial distribution of Hadoop, as well as Hadoop support). After reading this book, there’s no question in my mind that George has a deep understanding, not only of HBase as a data solution, but of the internal workings of HBase.
George gives the background and history of HBase in the larger context of relational databases and NoSQL, which I found to be very helpful. The many diagrams throughout the book are extremely useful in explaining concepts, especially for those of us coming from a relational database background.
George has an excellent and clear writing style. Take, for example, the section where he discusses The Problem with Relational Database Systems, giving a quick rundown of the typical steps for getting an RDBMS to scale up. The flow of his summary reads like the increasing levels of panic that many of us have gone through when dealing with a database-backed application that will not scale.
As an example of how thorough and comprehensive the book is, look at chapter 2, where there is an extensive discussion of the type and class (not desktop PCs!) of machines suitable for running HBase. George gives a truly helpful set of configuration practices, even down to a recommendation for having redundant power supply units.
Another example of his thoroughness comes where George discusses delete methods (Chapter 3). He shows how you can use custom versioning, while admitting that the example is somewhat contrived. Indeed, right after elaborating the example, there is a distinct “Warning” box that admits that custom versioning is not actually recommended. So, even though you may not implement custom versioning, you do understand it as a feature that HBase provides.
Many of the programming examples come with excellent remarks or discussions of the tradeoffs implicit in the techniques, including performance and scaling concerns. Java developers will be most comfortable with the majority of examples, but they can be followed by anyone with some object-oriented programming experience.
I really appreciated the thorough discussion in chapter 8 (“Architecture”) of subjects like B+ trees vs. Log-Structured Merge Trees (LSMs), the Write-Ahead Log, and seeks vs. transfers, topics which are relevant not only to HBase but to many database systems of varying architectures.
The level of thoroughness is also the book’s only weakness. I’m not sure who the target audience for this book is, because it serves both developers and system or database administrators. While nearly every imaginable HBase topic is touched upon, some would have been better off merely listed, with appropriate references given to sources of more information (for example, all those hardware recommendations). The print edition of the book is 552 pages.
Still, a complaint that a book is too detailed shouldn’t be interpreted as much of a complaint. Anyone with an interest in NoSQL databases in general, and HBase in particular, should read and study this book. It’s not likely to be superseded anytime soon.
The catalog page for “HBase: The Definitive Guide”.