MongoDB on Azure: How to choose the right instance type?

Azure is now a popular platform on which to deploy and manage MongoDB servers. Once you have chosen Azure as your platform, one of the first decisions you need to make is which instance type to deploy. In this matter Azure is fortunately much simpler than AWS. Azure basically offers three types of instances:

1. A series
The A series offers general purpose instances that fit most workloads. They are available in various sizes, ranging from 0.75 GB to 56 GB of RAM. Within the A series you are offered two options – ‘Basic’ and ‘Standard’. The ‘Basic’ tier costs less but does not offer features such as load balancing and auto-scaling. From a database perspective the most important difference is that with ‘Basic’ instances your Azure disks (page blobs) are limited to 300 IOPS/disk, whereas with ‘Standard’ instances you can go up to 500 IOPS/disk. This can make a big difference, especially with larger instances where you can RAID the disks together. Our recommendation is to use ‘Standard’ machines whenever possible to leverage the enhanced I/O. The number of disks that can be attached to a VM depends on the size of the VM – you can go up to 16 disks on an A7 machine. More details can be found here – Virtual machine sizes for Azure.

Continue reading

Three simple steps to improve the security of your MongoDB installation

MongoDB security has been in the news this week for all the wrong reasons. All the talk has been about the 40,000 or so databases that a group of students based in Germany found exposed, some of them containing production data. It’s egregious on several levels – not only was production data kept on an unauthenticated database, but that database was also left open to the internet. The only surprising thing is that it took this long to be discovered. If you don’t want your MongoDB servers to be in the news, here are three simple steps to improve the security of your MongoDB installation.
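To give a flavor of the kind of hardening involved, here is a minimal sketch of enabling authentication and restricting the listening interface – two of the most common fixes. The user name, password placeholder, and bind address below are illustrative assumptions, not values from this post:

// From the mongo shell, before enabling auth (sketch – names and addresses are illustrative)
use admin
db.createUser({user: 'admin', pwd: '<strong-password>', roles: ['userAdminAnyDatabase']})

// Then restart mongod so that it requires authentication and listens only on a private interface:
//   mongod --auth --bind_ip 10.0.0.12 --dbpath /data/db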

Continue reading

High performance MongoDB clusters on Amazon EC2

Performance is an important consideration when deploying MongoDB on the EC2 platform. From a hardware perspective, MongoDB performance on EC2 is gated primarily by two factors – RAM and disk speed. Typically (there are always exceptions) CPU should not be an issue. Memory is no longer a platform limitation – there are plenty of instance options (R3, I2, C3/C4) offering large amounts of RAM. For more details on how to choose the right instance type check out my other blog post – “How to choose the right EC2 instance type”.
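A quick, rough way to check whether your working set fits in RAM is to compare your data and index sizes against the instance’s memory from the mongo shell. A minimal sketch, assuming a database named ‘mydb’:

// Sketch: gauge working set vs. RAM (database name is illustrative)
use mydb
db.stats(1024 * 1024)                       // dataSize and indexSize reported in MB
db.serverStatus().extra_info.page_faults    // steadily climbing page faults suggest RAM pressure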

Continue reading

Fast paging with MongoDB

Paging through your data is one of the most common operations with MongoDB. A typical scenario is the need to display your results in chunks in your UI. If you are batch processing your data, it is also important to get your paging strategy right so that your processing can scale.

Let’s walk through an example to see the different ways of paging through data in MongoDB. In this example we have a CRM database of user data that we need to page through, displaying 10 users at a time – so in effect our page size is 10. Here is the structure of our user document:

{
    _id,
    name,
    company,
    state
}

Approach 1: Using skip() and limit()

MongoDB natively supports paging via the skip() and limit() methods. The skip(n) directive tells MongoDB to skip ‘n’ results, and the limit(n) directive instructs MongoDB to limit the result length to ‘n’ results. Typically you will be using the skip() and limit() directives with your cursor – but to illustrate the scenario we provide console commands that achieve the same results. For brevity, bounds-checking code is also excluded.

//Page 1
db.users.find().limit(10)
//Page 2
db.users.find().skip(10).limit(10)
//Page 3
db.users.find().skip(20).limit(10)
........

You get the idea. In general, to retrieve page n the code looks like this:

db.users.find().skip(pagesize*(n-1)).limit(pagesize)

However, as the size of your data increases, this approach has serious performance problems. The reason is that every time the query is executed the full result set is built up, and the server has to walk from the beginning of the collection to the specified offset. As your offset increases this process gets slower and slower. It also does not make efficient use of indexes. So the skip() and limit() approach is typically useful only for small data sets. If you are working with large data sets you need to consider other approaches.
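You can see the cost of a deep skip for yourself with explain(). A sketch, assuming a users collection with well over 50,000 documents on a 2.6-era server:

// Sketch: measure the work a deep skip forces (2.6-style explain() output)
db.users.find().skip(50000).limit(10).explain().nscanned
// returns roughly 50010 – the server examined every skipped document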

Approach 2: Using find() and limit()

The reason the previous approach does not scale well is the skip() command. So the goal in this section is to implement paging without using skip(). For this we are going to leverage a naturally ordered field stored in the document, like a timestamp or an id. In this example we are going to use the ‘_id’ stored in each document. ‘_id’ is a MongoDB ObjectID, a 12-byte structure containing a timestamp, machine id, process id, counter, etc. Because the leading bytes are a timestamp, ObjectIDs increase roughly in insertion order, which gives us a stable sort key for paging. The overall idea is as follows:
1. Retrieve the _id of the last document in the current page
2. Retrieve documents greater than this _id for the next page

var pageSize = 10;

//Page 1
var page = db.users.find().sort({'_id': 1}).limit(pageSize).toArray();
//Find the _id of the last document in this page
var last_id = page[page.length - 1]._id;

//Page 2
page = db.users.find({'_id': {$gt: last_id}}).sort({'_id': 1}).limit(pageSize).toArray();
//Update last_id with the _id of the last document in this page
last_id = page[page.length - 1]._id;

Continue reading

Enabling two factor authentication for MongoDirector.com

Enabling two factor authentication is an important upgrade to the security of your MongoDirector account. Even if your password is compromised, an attacker will still be unable to gain access to your account without the authentication device initialized with your account’s two factor secret.

You can enable two factor authentication in three easy steps:

1. Log in to your account at https://console.mongodirector.com, navigate to the Settings tab, select the ‘Two factor auth’ tab, and check “Enable Two factor auth”.

Continue reading

Geographically distributed MongoDB clusters on AWS in the EU region

Amazon recently announced the public availability of its EU Central (Frankfurt) region. With this new datacenter AWS now has two datacenters in the EU – Ireland and Frankfurt. The availability of two datacenters enables you to improve the georedundancy of your MongoDB replicas.

Here are the steps to set up a georedundant MongoDB cluster in the EU on AWS.

1. Cluster details

Enter the cluster details – name, version & size – to get started.

[Screenshot: cluster details for deploying a MongoDB cluster in the AWS EU regions]

2. Select the region for each replica set

We place the primary in EU-West (Ireland) and the secondary in EU-Central (Frankfurt). For full georedundancy you need to place the arbiter in a third region – if you place the arbiter in one of the two EU regions and that region goes down, your MongoDB cluster will lose its quorum and hence degrade to read-only mode. The arbiter is a voting-only node and does not hold any data, so irrespective of where you place the arbiter, all production data and backups stay in the EU.
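For readers wiring this up by hand rather than through the UI, the resulting replica set configuration looks roughly like the sketch below. The set name and hostnames are hypothetical placeholders:

// Sketch of a georedundant replica set (set name and hostnames are hypothetical)
rs.initiate({
    _id: 'eucluster',
    members: [
        { _id: 0, host: 'mongo1.eu-west-1.example.com:27017', priority: 2 },      // primary – Ireland
        { _id: 1, host: 'mongo2.eu-central-1.example.com:27017', priority: 1 },   // secondary – Frankfurt
        { _id: 2, host: 'arbiter.us-east-1.example.com:27017', arbiterOnly: true } // voting-only arbiter in a third region
    ]
});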

Continue reading

The role of the DBA in NoSQL

What is the role of the DBA in the rapidly evolving world of NoSQL? A majority of early NoSQL adoption is happening in the fast-growing world of small and medium companies built on public clouds. In most of these companies the DBA role does not exist, and this has led a lot of people to proclaim the end of the DBA. Is the DBA going the way of the dinosaur? I think the answer is more nuanced than that. First, let’s examine a few trends we are seeing in the marketplace that are going to have a great downstream impact on the technology workplace.

Continue reading

Getting started with user management in MongoDB

One of the first tasks after getting your MongoDB database server up and running is to configure your users and databases. In this blog post we will go over some common scenarios for creating and configuring users in MongoDB. MongoDB user management has improved significantly over the past two releases and now offers a capable and functional model: users can be assigned roles, and roles carry privileges. There are several built-in roles, or you can create your own custom roles.

The examples in this post use a 2.6.4 client and a 2.6.4 server. Considerable changes were made to the user management model from 2.4 to 2.6, so if you are using a 2.4 client a lot of these examples are not going to work. You can check the version of your MongoDB client as follows:

mongo --version

Adding a user to a database

The first step is to create your application database:

use applicationdb

After creating this database, we want to create the user that the application will use to write to it. We want this user to have read and write privileges on the database. Note that in 2.6, passing a role as a plain string assigns that role on the current database:

db.createUser({'user':'appuser', 'pwd':'<pass>', roles:['readWrite']});

Sometimes we also want to add users who have read-only access to the database. For example, we might want an analytics user that can only read the data:

db.createUser({'user':'analyticsuser', 'pwd':'<pass>', roles:['read']});

Now that the users are created, let’s try to connect as the application user from the MongoDB console:

mongo -u 'appuser' -p '<pass>' <servername>/applicationdb
MongoDB shell version: 2.6.4
connecting to: <servername>/applicationdb
>

So we were able to connect successfully. Note that the “/applicationdb” at the end of the connection string tells MongoDB to authenticate ‘appuser’ against the ‘applicationdb’ database.

Adding a user to multiple databases

In many scenarios we need to create multiple databases on the same server. For example, we might need to create another database, ‘analyticsdb’, to store the results of our analytics. ‘analyticsuser’ now needs ‘read’ access on ‘applicationdb’ and ‘readWrite’ access on ‘analyticsdb’.

So how do we achieve this? Should we add ‘analyticsuser’ to each database separately? That becomes a management nightmare over the long term as users and databases multiply. Fortunately there is a simple solution: we can centralize a user’s role assignments and store them in a single database. I prefer to store these assignments in the ‘admin’ database, since it is the hub of central administration for the server, but you can also store them in a separate database. When roles span databases, each role is specified as a document naming both the role and the database it applies to:

use admin
db.createUser({user:'analyticsuser', pwd:'<pass>', roles:[{'role':'read', 'db':'applicationdb'}, { 'role':'readWrite', 'db':'analyticsdb'}]});

Once the user is added you can run ‘show users’ to display the details of your users. Here is what my admin database looks like:

use admin
> show users
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "roles" : [ { "role" : "root", "db" : "admin" }, { "role" : "restore", "db" : "admin" } ]
}
{
    "_id" : "admin.analyticsuser",
    "user" : "analyticsuser",
    "db" : "admin",
    "roles" : [ { "role" : "read", "db" : "applicationdb" }, { "role" : "readWrite", "db" : "analyticsdb" } ]
}
>
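One thing to keep in mind with this setup: since ‘analyticsuser’ now lives in the ‘admin’ database, clients must authenticate against ‘admin’ even when they intend to work with another database. A sketch, with the server name and password as placeholders:

mongo -u 'analyticsuser' -p '<pass>' --authenticationDatabase admin <servername>/analyticsdb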

Continue reading

MongoDB Seattle 2014

Hope to see everybody at MongoDB Seattle, an annual one-day conference where developers, architects, and operations professionals deepen their knowledge of and expertise with MongoDB.

MongoDB Seattle will take place on September 16th at the Bell Harbor Conference Center. This highly productive day of learning and fun will feature advanced technical talks, partner sessions, and one-on-one time with MongoDB experts.

Stop by our booth and register to win the Amazon Kindle Fire we are giving away!

Continue reading

MongoDB analytics series: Slamdata – Run SQL and build reports directly on MongoDB

This is a guest post by John A. De Goes. John is the CTO and co-founder of SlamData. When not working on tricky compilation issues for SlamData, you can find John speaking at conferences, blogging, spending time with his family, and being active in the foothills of the Rocky Mountains. Contact John at john@slamdata.com.

MongoDB has been hugely successful in the developer community, partially because it allows developers to store data structures directly in a fast, scalable, modern database.

There's no need to map those data structures to rigid, predefined, and flat tables that have to be reassembled at runtime through lots of intermediate tables. (Described that way, the relational model sounds kind of old-fashioned, doesn't it?)

Unfortunately, the world's analytics and reporting software can't make sense of post-relational data. If it isn't flat, if it isn't all uniform, you can't do anything with it inside legacy analytics and reporting solutions!

Continue reading