The data for this analysis comes from the Kaggle US Accidents: A Countrywide Traffic Accident Dataset (2016 - 2019), which contains accident data collected from February 2016 to December 2019 across 49 US states. More documentation about this dataset can be found here.
The purpose of this analysis is to identify the key variables that impact the severity of traffic accidents in the US, and ultimately to predict accident severity from those variables through data visualization and regression analysis in Python.
The analysis starts with loading the data and the necessary libraries.
# Load necessary libraries
import os
import pandas as pd
import numpy as np
import math
import datetime
from datetime import timedelta
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import matplotlib.ticker as mtick
import seaborn as sns
import folium
import branca.colormap as cm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
import geopandas as gpd
import warnings
warnings.filterwarnings('ignore')
plt.style.use('seaborn')
%matplotlib inline
plt.rcParams["figure.figsize"] = (15,8)
Previewing the data and some summary statistics
# Load and preview data
accident = pd.read_csv("/Users/leo/Personal/GitHub/US_Accident_Analysis/US_Accidents_Dec19.csv")
accident.head()
# Summary Statistics
accident.describe()
Checking columns for NAs
# Check each column for NAs
accident.isnull().sum()
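Beyond raw counts, the share of missing values per column is often easier to act on when deciding which columns to drop; a minimal sketch:
# Fraction of missing values per column, largest first
accident.isnull().mean().sort_values(ascending=False).head(10)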
Cleaning the data by removing unnecessary columns that won't be used in the analysis, then reviewing the cleaned dataset.
# Exclude unnecessary columns
exclude = ["TMC","End_Lat","End_Lng","Description","Number","Street","Timezone",
"Airport_Code","Weather_Timestamp","Civil_Twilight","Nautical_Twilight","Astronomical_Twilight"]
accident_clean = accident.drop(exclude,axis=1)
accident_clean.head()
# Check nas after excluding unnecessary columns
accident_clean.isnull().sum()
To prepare the dataset for further analysis, some additional columns are added.
During this step, records from 2015 and 2020 are also excluded, since there isn't enough data in those two years. This should give us a relatively clean dataset to start the analysis.
# Adding calculation of time difference of start and end time in minutes
accident_clean.Start_Time = pd.to_datetime(accident_clean.Start_Time)
accident_clean.End_Time = pd.to_datetime(accident_clean.End_Time)
accident_clean["Time_Diff"] = (accident_clean.End_Time - accident_clean.Start_Time).astype('timedelta64[m]')
accident_clean["Start_Date"] = accident_clean["Start_Time"].dt.date
accident_clean["End_Date"] = accident_clean["End_Time"].dt.date
accident_clean["Year"] = accident_clean["Start_Time"].dt.year
accident_clean["Month"] = accident_clean["Start_Time"].dt.month
accident_clean["Day"] = accident_clean["Start_Time"].dt.day
accident_clean["Hour"] = accident_clean["Start_Time"].dt.hour
# Excluding accidents in 2015 and 2020 where there's not enough data
accident_clean = accident_clean[(accident_clean["Year"] > 2015) & (accident_clean["Year"] < 2020)]
# Verify data
accident_clean.head()
First, group the data by year and Severity, using size() to count the records in each group; then use unstack() to pivot the result.
Most of the accidents fall under Severity 2 and 3, and the number of accidents is increasing year over year.
# Examine data
accident_clean.groupby(["Year","Severity"]).size().unstack()
Plot the data to visualize the year-over-year traffic accident trend.
# Group by year, and by year and severity
group_year = accident_clean.groupby(["Year"]).agg(Count = ('ID','count'))
group_year_sev = accident_clean.groupby(["Year","Severity"]).size().unstack()
# YoY Total Accident Count
# fig = plt.figure(figsize=(15,8))
# plt.subplot(1, 2, 1)
plt.plot(group_year.index, group_year["Count"])
plt.gca().xaxis.set_major_formatter(StrMethodFormatter('{x:.0f}'))  # no thousands separator for years
plt.xticks(np.arange(2016, 2020, 1.0))
plt.show()
Then plot the data by Severity and by Year to show the trend of accidents by year and by accident severity.
From the plot, we can see that the growth is driven mostly by Severity 2 accidents, while Severity 1 and 4 stay relatively flat; the bar plots below make this easier to confirm.
# YoY trend by severity: growth is mostly in Severity 2, while 1 and 4 look flat; confirm with a bar plot
# fig = plt.figure(figsize=(15,8))
group_year_sev2 = accident_clean.groupby(["Year","Severity"]).agg(Count = ('ID','count')).reset_index()
sns.lineplot(x='Year',y='Count',hue="Severity",data=group_year_sev2,palette="Set1")
plt.gca().xaxis.set_major_formatter(StrMethodFormatter('{x:.0f}'))  # no thousands separator for years
plt.xticks(np.arange(2016, 2020, 1.0))
plt.show()
The following bar plots show similar information to what was observed previously.
# YoY Severity Count
# group_year_sev = accident_clean.groupby(["Year","Severity"]).agg(Count = ('ID','count'))
# group_year_sev
accident_clean.groupby(["Year","Severity"]).size().unstack().plot(kind='bar',stacked=True)
The 100% stacked bar plot shows the % breakdown of each severity by year.
# Makes more sense to show stacked 100%, a different view
accident_clean.groupby(["Year","Severity"]).size().groupby(level=0).apply(
lambda x: 100 * x / x.sum()
).unstack().plot(kind='bar',stacked=True)
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
plt.legend(loc = 'upper right',title = 'Severity')
plt.show()
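The same per-year normalization can be written without the groupby/apply round-trip, by dividing each row of the pivoted counts by its row total; a minimal equivalent sketch:
# Equivalent normalization: pivot the counts, then divide each row by its total
counts = accident_clean.groupby(["Year","Severity"]).size().unstack()
(counts.div(counts.sum(axis=1), axis=0) * 100).plot(kind='bar', stacked=True)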
The dataset also contains some variables describing the weather conditions when the accidents happened. We want to examine those variables to see whether any of the weather-related variables have an impact on the severity of the accidents.
We use boxplot() from the seaborn library to see how the weather-related variables vary across accident severities.
Below is a boxplot of Temperature by Severity. There is almost no difference in median temperature among Severity 1, 2, and 3, while Severity 4 has a lower median temperature, which might indicate that lower temperatures lead to more severe accidents.
# Boxplot to show if temperature has impact on the severity of the accident,
# looks like the more severe accident has lower temperature
sns.boxplot(x="Severity", y="Temperature(F)", data=accident_clean, palette="Set1")
Below is a boxplot of Humidity and Severity; similarly, we can see that higher humidity might lead to more severe accidents.
# Boxplot to show if humidity has an impact on the severity of the accident;
# looks like more severe accidents happen at higher humidity
sns.boxplot(x="Severity", y="Humidity(%)", data=accident_clean, palette="Set1")
Below is a boxplot of Wind Chill and Severity; similarly, we can see that lower wind chill might lead to more severe accidents.
# Examine wind chill and accident severity; lower wind chill is associated with more severe accidents
sns.boxplot(x="Severity", y="Wind_Chill(F)", data=accident_clean, palette="Set1")
There are some other categorical variables that might have an impact on the severity of the accidents; we use the crosstab() function to get a count of the accidents in each group.
# Count of Severity by Sunrise_Sunset to see if more severe accidents happened at night
pd.crosstab(accident_clean["Severity"], accident_clean["Sunrise_Sunset"],
rownames=['Severity'], colnames=['Sunrise_Sunset'])
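crosstab() can also normalize the counts within each severity directly via its normalize parameter, anticipating the 100% stacked view below; a minimal sketch:
# Share of Day vs. Night accidents within each severity level
pd.crosstab(accident_clean["Severity"], accident_clean["Sunrise_Sunset"], normalize='index')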
The 100% stacked barplot below indicates that more severe accidents happen during the night than in the daytime.
# Severity 1 and 2 have a similar % split between day and night, while 3 and 4 have a higher % of accidents at night
accident_clean.groupby(["Severity","Sunrise_Sunset"]).size().groupby(level=0).apply(
lambda x: 100 * x / x.sum()
).unstack().plot(kind='bar',stacked=True)
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
plt.legend(loc = 'upper right',title = 'Sunrise/Sunset')
plt.show()
The plot below shows that most accidents happen on the right side of the road.
# Most accidents happened on the right side of the road
# Severity 3 has more accidents on the right than on the left side of the road
accident_clean[accident_clean.Side != " "].groupby(["Severity","Side"]).size().groupby(level=0).apply(
lambda x: 100 * x / x.sum()
).unstack().plot(kind='bar',stacked=True)
plt.legend(loc = 'upper right')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
plt.show()
The plot below shows, for each month, the breakdown of accidents by severity; more severe accidents (Severity 3 and 4) happened in June and July.
# Examining Severity by month, most severe accidents (3 and 4) happened in June and July
accident_clean.groupby(["Month","Severity"]).size().groupby(level=0).apply(
lambda x: 100 * x / x.sum()
).unstack().plot(kind='bar',stacked=True,figsize = (15,8))
plt.legend(loc = 'upper right')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
plt.show()
For numeric variables that are severely skewed, a boxplot is not a good visual representation; for those variables, we use groupby() to get the mean (or median) by Severity.
sns.distplot(accident_clean['Distance(mi)'])
accident_clean.groupby('Severity')['Distance(mi)'].mean()
# sns.boxplot(x="Severity", y="Wind_Speed(mph)",
# data=accident_clean[accident_clean["Wind_Speed(mph)"] <= 50], palette="Set1")
accident_clean.groupby('Severity')['Time_Diff'].median()
# sns.boxplot(x="Severity", y="Time_Diff", data=accident_clean, palette="Set1")
As expected, more severe accidents affect longer stretches of road and take longer to clear.
There is another set of variables that describes the condition of the road, such as whether there's a bump in the road or whether there's a traffic signal.
The following analysis will focus on these variables.
First, select these variables along with Severity into a new dataset called accident_road.
accident_road = accident_clean[['Severity','Amenity', 'Bump','Crossing','Give_Way',
'Junction','No_Exit','Railway','Roundabout','Station',
'Stop','Traffic_Calming','Traffic_Signal','Turning_Loop']]
accident_road.head()
In order to plot these variables in the same grid, we use melt() and then groupby() to reshape the dataset.
accident_road_melt = pd.melt(accident_road,id_vars =['Severity'],value_vars=['Amenity', 'Bump','Crossing','Give_Way',
'Junction','No_Exit','Railway','Roundabout','Station',
'Stop','Traffic_Calming','Traffic_Signal','Turning_Loop'])
group_road = accident_road_melt.groupby(["Severity","variable","value"]).agg(Count = ('value','count')).reset_index()
group_road.head()
# pd.pivot_table(data=accident_road_melt,index='Severity',columns=['value'],aggfunc='count')
The following plot shows, for each variable broken down by accident severity, the count of True and False values.
From the plot, Crossing, Junction, and Traffic_Signal have some impact.
g = sns.catplot(x="Severity", y="Count",
hue="value", col="variable",
col_wrap=3, data=group_road, kind="bar",
height=4, aspect=.7)
g.fig.set_figwidth(15)
g.fig.set_figheight(8)
Similarly, the output below summarizes the True and False counts for each variable, grouped by Severity.
# Count of True and False of each road condition and group by Severity
(accident_road.set_index('Severity')
.groupby(level='Severity')
# count the True/False values in each column using groupby + value_counts
.apply(lambda g: g.apply(pd.value_counts))
.unstack(level=1)
.fillna(0))
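An equivalent tabulation can also be produced from the melted frame built earlier; a minimal sketch using crosstab():
# Same True/False counts per road-condition variable and severity
pd.crosstab([accident_road_melt['Severity'], accident_road_melt['variable']],
            accident_road_melt['value'])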
Using the heatmap() function from the seaborn library, we want to know, for each month, which days of the month are more likely to have accidents.
From the heatmap below, we can see that more accidents happen in the second half of the year, and high-accident days appear to be spread randomly, varying from month to month.
# More accidents are happening in the second half of the year
group_day = accident_clean.groupby(["Month","Day"]).size().unstack()
ax = sns.heatmap(group_day, cmap="YlGnBu",linewidths=0.1)
Similarly, we want to see at what time of day more accidents happen, using a heatmap.
As expected, most accidents happened between 7 and 8 AM, the morning rush hour. The morning rush hour has many more accidents than the afternoon rush hour (4 to 6 PM). This trend is observed consistently throughout each day of the month.
# Most accidents happened between 7 and 8 AM, the morning rush hour
# The morning rush hour has many more accidents than the afternoon rush hour (4 to 6 PM)
group_hour = accident_clean.groupby(["Day","Hour"]).size().unstack()
ax = sns.heatmap(group_hour, cmap="YlGnBu",linewidths=0.1)
After wrapping up the data visualization of numeric and categorical variables, we also want to visualize the geographic data using maps.
The static map below is plotted using Basemap from mpl_toolkits. A scatter plot is a good way to start.
from mpl_toolkits.basemap import Basemap
# A basic map
# m=Basemap(llcrnrlon=-160, llcrnrlat=-75,urcrnrlon=160,urcrnrlat=80)
m = Basemap(llcrnrlat=20,urcrnrlat=50,llcrnrlon=-130,urcrnrlon=-60)
# m = Basemap(projection='lcc', resolution='h', lat_0=37.5, lon_0=-119,
# width=1E6, height=1.2E6,
# llcrnrlat=20,urcrnrlat=50,llcrnrlon=-130,urcrnrlon=-60)
m.shadedrelief()
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')
m.drawstates(color='gray')
lat = accident_clean.Start_Lat.tolist()
lon = accident_clean.Start_Lng.tolist()
x,y = m(lon,lat)
m.plot(x,y,'bo',alpha = 0.2)
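With several million rows, plotting every accident is slow and heavily overplotted; a hedged alternative is to scatter a random sample instead (the 50,000 sample size below is an arbitrary choice):
# Scatter a random subsample instead of every accident to keep rendering fast
sample = accident_clean.sample(n=50000, random_state=42)
sx, sy = m(sample.Start_Lng.tolist(), sample.Start_Lat.tolist())
m.plot(sx, sy, 'bo', alpha=0.2, markersize=1)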
The next step is to plot a choropleth map using the folium library.
First, download the US shapefiles from the US Census. The shapefiles used in this analysis are the state shapefile and the ZIP code shapefile.
We use geopandas to load the shapefile, then groupby() state or ZIP code and merge() to get the final dataset for map plotting.
# US Shape file from https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip
shapefile_state = '/Users/leo/Personal/Python/cb_2018_us_state_500k/cb_2018_us_state_500k.shp'
#Read shapefile using Geopandas
gdf_state = gpd.read_file(shapefile_state)
gdf_state.head()
group_state = accident_clean.groupby(["State"]).agg(Count = ('ID','count'))
group_state.reset_index(level=0, inplace=True)
group_state[:5]
# Merge shape file with accident data
state_map = gdf_state.merge(group_state, left_on = 'STUSPS', right_on = 'State')
state_map.head()
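Note that merge() defaults to an inner join, which silently drops any state that doesn't match between the two frames; a quick sanity check (sketch):
# States present on one side but not the other; both sets should be empty (or nearly so)
print(set(group_state['State']) - set(gdf_state['STUSPS']))
print(set(gdf_state['STUSPS']) - set(group_state['State']))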
# group_state
m = folium.Map(location=[37, -102], zoom_start=4)
folium.TileLayer('CartoDB positron',name="Light Map",control=False).add_to(m)
# myscale = (state_map['Count'].quantile((0,0.2,0.4,0.6,0.8,1))).tolist()
m.choropleth(
geo_data=state_map,
name='Choropleth',
data=state_map,
columns=['State','Count'],
key_on="feature.properties.State",
fill_color='YlGnBu',
# threshold_scale=myscale,
fill_opacity=1,
line_opacity=0.2,
legend_name='Count of Accidents',
smooth_factor=0
)
style_function = lambda x: {'fillColor': '#ffffff',
'color':'#000000',
'fillOpacity': 0.1,
'weight': 0.1}
highlight_function = lambda x: {'fillColor': '#000000',
'color':'#000000',
'fillOpacity': 0.50,
'weight': 0.1}
toolkit = folium.features.GeoJson(
state_map,
style_function=style_function,
control=False,
highlight_function=highlight_function,
tooltip=folium.features.GeoJsonTooltip(
fields=['State','Count'],
aliases=['State: ','# of Accidents: '],
style=("background-color: white; color: #333333; font-family: arial; font-size: 12px; padding: 10px;")
)
)
m.add_child(toolkit)
m.keep_in_front(toolkit)
folium.LayerControl().add_to(m)
m
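When running outside a notebook, the map object doesn't render inline; folium's save() writes it to a standalone HTML file (the filename below is an arbitrary choice):
# Write the interactive choropleth to a standalone HTML file
m.save('us_accidents_choropleth.html')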