Designing an Entity Relationship Diagram (ERD)

As a data analyst, my job requires interacting with databases, often querying data and building new tables. While I’m not a database administrator and don’t know much about managing databases, I find it helpful to be able to both create and understand entity-relationship diagrams.

Working for startups, I’ve been responsible for documenting existing databases and writing requirements for building new ones. And it’s a lot easier to have an informed discussion about designs when you can give your data engineer a diagram that illustrates what you’re looking for. Welcome the ERD, a visual representation of your database. Why didn’t I learn about this while at university!?

Lucidchart makes this process easy, and you can get started for free. If you’re only looking to make a few diagrams and won’t use more than 25 MB of storage, the free account works great.

Crow’s Foot Notation
Figure: crow’s foot notation. Source: Lucidchart

ERD cardinality

According to Wikipedia, cardinality, as it relates to data modeling, is the numerical relationship between rows of one table and rows in the other. Common cardinalities include one-to-one, one-to-many, and many-to-many.

Here’s the cardinality between two tables, customers and orders, using crow’s foot notation.

  • ring and dash: minimum zero, maximum one (optional)
  • dash and dash: minimum one, maximum one (mandatory)
  • ring and crow’s foot: minimum zero, maximum many (optional)
  • dash and crow’s foot: minimum one, maximum many (mandatory)
Figure: cardinality between the customers and orders tables

In this example, a customer can have zero, one, or many orders, whereas an order can have one, and only one, customer. This is the table-to-table cardinality; in practice, we connect columns between tables to show their relationship.

Here are the same two tables with some columns added.

Figure: database table cardinality

The Customers table has the columns ‘id’ and ‘customer_name’. The Orders table has a primary key of ‘id’ and a foreign key named ‘customer_id’. Here we’ve connected the Customers.id primary key column with the Orders.customer_id column. The same relationship applies as before: a customer can have zero or many entries in the Orders table, and each order row will have one, and only one, customer id.
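
If it helps to see this relationship in SQL, here’s a minimal sketch of the two tables (PostgreSQL-flavored; the column types are my assumptions, since the diagram only shows keys and column names):

-- parent table: each customer gets a unique id (primary key)
CREATE TABLE customers (
    id            SERIAL PRIMARY KEY,
    customer_name VARCHAR(255) NOT NULL
);

-- child table: customer_id is a foreign key back to customers.id
CREATE TABLE orders (
    id          SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers (id)
);

The NOT NULL on customer_id is what enforces the “one and only one customer” side of the relationship; the “zero or many” side comes from the orders table being free to hold any number of rows per customer.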

Here are some additional examples.

One-to-one

A person can have only one birth certificate, and a birth certificate can belong to only one person.

One-to-one (optional on one side)

A person can have no driver’s license, or they can have one. (Ignoring different types of licenses such as motorcycle, commercial, etc.) A driver’s license can only have one person.

Many-to-one

A person is only born in a single birthplace. A location can have one or many people born there.

Many-to-many

A person can read zero books or many. And a book can be read by zero people or by many.

One-to-many

A customer can have zero orders or many. A single order can have one and only one customer.

Many-to-many

A course can have one or many students. A student can take one or many courses.

Designing your database

Now that we understand how to illustrate relationships between our entities, we need some entities!

Figure: Lucidchart entity relationship shapes

Lucidchart provides a number of entity-relationship shapes for building diagrams. There’s a single-column, a two-column, and a three-column entity shape. The three-column shape can be used to identify your key, column name, and data type. This is the entity shape that I use most often.

If you already have a database, then you can import its schema directly into Lucidchart (paid feature).

Let’s take a look at a more complete entity.

Here we have a ‘customers’ table and a ‘placed_orders’ table. The ‘customers’ table has an id column that is used as the primary key. In the ‘placed_orders’ table, the customer_id column is the foreign key that creates the relationship back to the ‘customers’ table. A customer can have zero or many placed orders, and an order can have one and only one customer.

From here it’s a matter of building it out. Identify all of the data you are collecting (or would like to collect), the entities for those data, and how everything relates to everything else. This is where I usually start collaborating with a database expert, someone who can make sure we build something efficient.

Export your schema

Figure: Lucidchart SQL export dialog

Lucidchart has a feature that allows you to export your ERD as SQL commands to build your tables. You can choose from four database management systems: MySQL, PostgreSQL, SQL Server, and Oracle.
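
To give a feel for that output, here’s a rough sketch of what the exported DDL could look like for the ‘placed_orders’ table from earlier (assuming PostgreSQL as the target; a real export mirrors whatever keys and types you defined in your diagram):

CREATE TABLE placed_orders (
    id          SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    CONSTRAINT fk_placed_orders_customers
        FOREIGN KEY (customer_id) REFERENCES customers (id)
);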

There’s a lot of work upfront in designing your entity-relationship diagram, but when it comes time to create your tables, it can be done very quickly.

The 5 stages of data analysis

Any good data analysis starts with a well thought out process. According to Wikipedia, “data analysis is a process of inspecting, cleansing, transforming and modeling data to discover useful information, informing conclusions and supporting decision-making.”

These phases of data analysis have been summarized into 5 distinct stages, as described in The Art of Data Science by Roger D. Peng, Ph.D. and Elizabeth Matsui:

  1. Stating the question
  2. Exploratory Data Analysis
  3. Building formal statistical models
  4. Interpretation
  5. Communicating the results

Peng further describes that within each of these five stages, data analysts should engage in a three-step iterative process so there is an opportunity to re-evaluate the findings and refine the step that was just performed.

  1. Setting Expectations
  2. Collecting your data and comparing it to your expectations
  3. Revising your expectations or fixing the data (e.g. collecting more) so your data matches what you expected

The table below illustrates the relationship between each step.

| Stage | Set Expectations | Collect Information | Revise Expectations |
| --- | --- | --- | --- |
| Question | Question is of interest to the audience | Literature search, data | Sharpen question |
| Exploratory Data Analysis | Data are appropriate for the question | Make exploratory plots of data | Refine question or collect more data |
| Formal Modeling | Primary model answers question | Fit secondary models, sensitivity analysis | Revise formal model to include more predictors |
| Interpretation | Interpretation of analyses provides a specific & meaningful answer to the question | Interpret totality of analyses with a focus on effect sizes & uncertainty | Revise EDA and/or models to provide specific & interpretable answer |
| Communication | Process & results of the analysis are understood, complete & meaningful to the audience | Seek feedback | Revise analyses or approach to presentation |

Table: Epicycles of Analysis. Source: The Art of Data Science by Roger D. Peng, Ph.D. & Elizabeth Matsui

Let’s take a look at this entire process using an example data set.

1. Stating and refining the question

It’s more than likely that a question will be given to you, or at least one that you can work with and refine into a better one. Let’s say we have a customer survey that asks customers to rate the likelihood they would recommend our product to their friends. The marketing director would like to know if there is any relationship between the survey results and the customers’ product interests.

Since we now know that we are looking for a relationship between two attributes of a customer, we can determine that we are asking an exploratory question. Exploratory questions require you to analyze data to find patterns and trends between features, while stopping short of forming a fully thought-out hypothesis. The findings from your exploratory data analysis will provide supporting evidence for you to form a hypothesis.

Our refined question could be restated as:

Is there a relationship between a customer’s product interest and the Net Promoter Score they gave our product?

Now let’s quickly run through the three-step process.

  1. Setting expectations
    1. The question is of interest to our audience. In this case, it was our marketing department that would like to understand these data better.
    2. The question asks if there is a relationship between NPS and product interest, but it does not ask whether one influences the other.
  2. Collecting Information
    1. NPS is a common metric. We can do some additional research to see if our NPS is representative of other companies within our industry. For this exercise, we’ll make the assumption it is similar to external data.
    2. Our product interests are unique to our business so we’ll move forward with collecting our own data.
  3. Revise Expectations
    1. There is no need to revise our statement any further.

To collect our data, we’re going to select the id, nps_score, and interests columns for all customers from the ‘customers’ table in our example database.

SELECT id, nps_score, interests
FROM customers;

This SQL query resulted in 1,000 records. We’ll export these results as a CSV file for further analysis using Python. From a cursory view, we can confirm that we have NPS and product interest data.
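
For completeness, here’s one way that query-to-CSV step could be scripted in Python rather than done by hand. This is only a sketch: the SQLAlchemy connection string is a placeholder, and your database details will differ.

import pandas as pd
from sqlalchemy import create_engine

# placeholder connection string; swap in your own host, database, and credentials
engine = create_engine('postgresql://user:password@localhost:5432/example_db')

# run the same query as above and load the results into a DataFrame
query = 'SELECT id, nps_score, interests FROM customers;'
df = pd.read_sql(query, engine)

# write the results out for the analysis below
df.to_csv('data/nps_scores.csv', index=False)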

2. Exploratory Data Analysis

We’re finally at the point where we’ll start exploring the data, cleaning the data, and examining any relationships between features. There are three primary goals of EDA.

  1. Determine whether the data are useful and there is enough data to work with
  2. Determine if we have the right data to answer our question
  3. Develop a primary model of the answer to our question

OK, here we go!

Read in our data

You can follow along with this Deepnote project:

https://beta.deepnote.com/project/96779f66-1176-43e7-8e1f-25a191fa4beb

import pandas as pd

# read in our data
filename = 'data/nps_scores.csv'
df = pd.read_csv(filename)
print(df.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 3 columns):
id           1000 non-null int64
nps_score    391 non-null float64
interests    789 non-null object
dtypes: float64(1), int64(1), object(1)
memory usage: 23.6+ KB
None

Let’s check our dataset and start cleaning.

We can see that our data does include 1,000 rows and three columns: id, nps_score, and interests. But there is already some cause for concern: only 391 customers have an NPS value. We’ll continue with what we have, but we might want to send out more NPS surveys.

Looking at the top and bottom of the dataset, we can start to get a feel for what we are working with.

# Print the top and bottom 5 rows
print(df.head())
print(df.tail())
 id  nps_score    interests
0  1462        3.0  Fly Fishing
1  1491        2.0  Fly Fishing
2  1492        NaN  Fly Fishing
3  1551        NaN  Fly Fishing
4  1553        NaN  Fly Fishing
       id  nps_score         interests
995  1989       10.0           Surfing
996  1994        6.0  Ski/Snowboarding
997  2000        NaN           Surfing
998  1975        NaN       Fly Fishing
999  1978        4.0       Fly Fishing

We can see that the data frame has an id column that contains a unique customer id value. Let’s use that column as our index.

# Set the index to the id column & drop the original id column
df.index = df['id']
df = df.drop(columns=['id'])
print(df.head())
 nps_score    interests
id                          
1462        3.0  Fly Fishing
1491        2.0  Fly Fishing
1492        NaN  Fly Fishing
1551        NaN  Fly Fishing
1553        NaN  Fly Fishing

From our data frame info above, we know we need to either drop our NaN values, ignore them, or fill in substitutes. Let’s drop any rows with NaN values and check the data frame info.

# Drop NaNs and check the df.info()
df = df.dropna()
print(df.info())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 315 entries, 1462 to 1978
Data columns (total 2 columns):
nps_score    315 non-null float64
interests    315 non-null object
dtypes: float64(1), object(1)
memory usage: 7.4+ KB
None
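
Had we wanted to keep every row instead of dropping them, a rough sketch of the fill-in alternative mentioned above could look like the following (it re-reads the raw file so it stands on its own; filling scores with the column mean is a simplistic choice and can bias results, so treat it as illustration only):

# re-read the raw file so this sketch is independent of the dropna above
raw = pd.read_csv('data/nps_scores.csv', index_col='id')

# fill missing scores with the column mean and missing interests with a placeholder label
raw['nps_score'] = raw['nps_score'].fillna(raw['nps_score'].mean())
raw['interests'] = raw['interests'].fillna('Unknown')
print(raw.info())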

We’re left with 315 customers that have both an NPS and interest value. Let’s view some descriptive statistics & counts.

df['nps_score'].describe()
count    315.000000
mean       5.219048
std        2.905017
min        1.000000
25%        3.000000
50%        5.000000
75%        8.000000
max       10.000000
Name: nps_score, dtype: float64
df['interests'].value_counts()
Surfing             60
Mountain Biking     57
Ski/Snowboarding    56
Trail Running       49
Fly Fishing         48
Climbing            45
Name: interests, dtype: int64

Hey, we have some summarized data to work with now. As you can see, this is a very simplistic example. In reality, a data set would likely involve many more columns and require more pre-processing.

The next step is to make a plot.

import matplotlib.pyplot as plt

# create a boxplot of nps_score by interests
df.boxplot(column='nps_score', by='interests')
plt.xticks(rotation=45)  # rotate x-axis labels so we can read them
plt.show()

We can see that customers who are interested in climbing have a slightly higher median NPS.

  • Can we start to answer our question about the relationships between interests and NPS?
  • Do we have enough data with only 45 responses for climbing?
  • Does this data get us thinking about what we should do next to further explore NPS and interests and develop a hypothesis?

3. Building formal statistical models

A key characteristic of a model is reduction: we want to explain our EDA findings in the simplest way possible. Our example is a simple one, but depending on your EDA you may be building a more involved model.

In our case, we are able to report descriptive statistics of NPS values for each interest category. Descriptive statistics are the simplest model for our question.

# Create a DataFrame of descriptive statistics for each interests value.
stats = pd.DataFrame(group.describe().rename(columns={'nps_score':name}).squeeze()
                         for name, group in df.groupby('interests'))
print(stats)
                  count      mean       std  min  25%  50%   75%   max
Climbing           45.0  5.711111  2.809256  1.0  3.0  6.0  8.00  10.0
Fly Fishing        48.0  4.562500  2.974403  1.0  2.0  4.0  6.25  10.0
Mountain Biking    57.0  5.368421  2.844993  1.0  3.0  5.0  8.00  10.0
Ski/Snowboarding   56.0  4.964286  2.898051  1.0  2.0  5.0  7.00  10.0
Surfing            60.0  5.433333  3.038547  1.0  3.0  5.0  8.00  10.0
Trail Running      49.0  5.265306  2.841475  1.0  3.0  5.0  7.00  10.0

4. Interpretation

Our EDA has revealed that Net Promoter Scores are low across product interests. We may need to revise our question and ask if there is any relationship between NPS and products purchased. In many cases such as this example, EDA leads to further discussion and questions. We’ve also discovered that only a third of our customers have responded to our NPS survey. There’s an opportunity to collect more data.

5. Communicate

Our last stage of data analysis is communicating the results. This is often a report or presentation given to stakeholders. I’ve also found it beneficial to publish my findings on Slack along with the original data. That way you can start a conversation and have colleagues provide their feedback. Some of the best advice I’ve received has come from engineers. Being transparent about the data analysis process goes a long way toward creating confidence in your reporting.

A basic example of a Usage Tracking disclosure

User data is a valuable resource, one that benefits both your company and the users themselves. For products to evolve and keep getting better, teams need to understand how their products are being used so they can make good design decisions.

User data is also a sensitive resource and often reveals uniquely identifiable traits about a user’s behavior. For this reason, implementing a data classification policy can help protect your company as well as your users. It’s also important to be mindful of the strict guidelines and rules of the GDPR, the California Consumer Privacy Act, HIPAA, and any other laws that protect consumers’ data rights in your jurisdiction.

A good example of a data classification standard is one that UC Berkeley has implemented for its institution.

https://security.berkeley.edu/data-classification-standard

Once you have implemented strong data protection policies, it’s important to also communicate what data you collect from your users. Below is an example of a data tracking disclosure that would be available to a user of a fictitious application called TakeOut Data.

Example Usage Tracking Disclosure

At TakeOut Data, we collect data to better understand how our users are interacting with our website and products. We use this data to improve our product as well as the user’s experience.

Below is a list of metrics and information that we collect.

Application Version:

This allows us to identify which versions are currently being used. This is helpful when we release updates & bug fixes.

Operating System:

Informs us of which operating system(s) are used to access our application.

Time in the application:

The amount of time that a user spends within our application.

Demographic information:

We collect demographic information when a user creates an account. Some of these data are optional.

  • Name
  • Company Name (optional)
  • Address, City, State, Country (optional)
  • Email
  • Industry (optional)
  • Food Preferences (optional)

Number of orders:

The number of orders that are placed within our application.

Total amount spent on orders:

The total amount spent on orders within our application.

Favorites:

Which items and merchants you have marked as favorites. This allows us to make better suggestions for you.


The above example is a short, non-comprehensive list of user attributes and metrics that would be tracked within the TakeOut Data application. It’s a good user experience to make your data policy, along with examples of the data you track, easy to find. In some cases you’re required to make this information available, along with a process for your users to opt out of marketing tracking and the sale of their information. You’ll also want processes in place for the complete deletion of customer data where the law requires it.