Comprehensive Guide to Tackling a Customer Churn Analysis Using Statistics

Comments · 23 Views

This blog offers a guide for master's students in Statistics to analyze customer churn. It covers data cleaning, exploratory analysis, and model building using logistic regression and other models. Ideal for those seeking Statistics Assignment help service, it provides practical insi

As a master's student pursuing Statistics, you will inevitably face challenging assignments that test your analytical skills and statistical knowledge. One such tough yet rewarding question involves analyzing customer churn for a telecommunications company. Here, we'll walk through the problem and provide a detailed solution, making it an excellent resource for anyone seeking Statistics Assignment help service.

The Problem: Analyzing Customer Churn in a Telecommunications Company

You are provided with a dataset from a telecommunications company containing customer information. The company wants to understand the factors contributing to customer churn (i.e., customers leaving the service). The dataset includes variables such as Gender, SeniorCitizen, Partner, Dependents, Tenure, PhoneService, MultipleLines, InternetService, OnlineSecurity, OnlineBackup, DeviceProtection, TechSupport, StreamingTV, StreamingMovies, Contract, PaperlessBilling, PaymentMethod, MonthlyCharges, TotalCharges, and Churn.

The tasks involved are:

  1. Data Cleaning and Preparation
  2. Exploratory Data Analysis (EDA)
  3. Model Building
  4. Advanced Analysis
  5. Interpretation and Reporting

The Solution

Let’s dive into the solution, step-by-step.

1. Data Cleaning and Preparation

Firstly, generate a random dataset to simulate the real-world scenario:

 
import pandas as pdimport numpy as np# Set random seed for reproducibilitynp.random.seed(42)# Number of samplesn_samples = 1000# Generate random datadata = { 'CustomerID': np.arange(1, n_samples + 1), 'Gender': np.random.choice(['Male', 'Female'], n_samples), 'SeniorCitizen': np.random.choice([0, 1], n_samples), 'Partner': np.random.choice(['Yes', 'No'], n_samples), 'Dependents': np.random.choice(['Yes', 'No'], n_samples), 'Tenure': np.random.randint(1, 73, n_samples), 'PhoneService': np.random.choice(['Yes', 'No'], n_samples), 'MultipleLines': np.random.choice(['Yes', 'No', 'No phone service'], n_samples), 'InternetService': np.random.choice(['DSL', 'Fiber optic', 'No'], n_samples), 'OnlineSecurity': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'OnlineBackup': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'DeviceProtection': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'TechSupport': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'StreamingTV': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'StreamingMovies': np.random.choice(['Yes', 'No', 'No internet service'], n_samples), 'Contract': np.random.choice(['Month-to-month', 'One year', 'Two year'], n_samples), 'PaperlessBilling': np.random.choice(['Yes', 'No'], n_samples), 'PaymentMethod': np.random.choice(['Electronic check', 'Mailed check', 'Bank transfer', 'Credit card'], n_samples), 'MonthlyCharges': np.round(np.random.uniform(20, 120, n_samples), 2), 'TotalCharges': np.round(np.random.uniform(20, 5000, n_samples), 2), 'Churn': np.random.choice(['Yes', 'No'], n_samples)}# Create DataFramedf = pd.DataFrame(data)# Handle inconsistency where TotalCharges should be greater than or equal to MonthlyCharges * Tenuredf['TotalCharges'] = df.apply(lambda row: max(row['TotalCharges'], row['MonthlyCharges'] * row['Tenure']), axis=1)# Display the first few rowsdf.head()

2. Data Cleaning and Preparation

 
# Check for missing valuesprint(df.isnull().sum())# Convert categorical variables to dummy variablesdf_encoded = pd.get_dummies(df, drop_first=True)# Display the first few rows of the encoded DataFramedf_encoded.head()

3. Exploratory Data Analysis (EDA)

 
import seaborn as snsimport matplotlib.pyplot as plt# Summary statisticsprint(df.describe())# Visualize the distribution of key variablesplt.figure(figsize=(14, 6))plt.subplot(1, 3, 1)sns.histplot(df['MonthlyCharges'], bins=30, kde=True)plt.title('Monthly Charges Distribution')plt.subplot(1, 3, 2)sns.histplot(df['TotalCharges'], bins=30, kde=True)plt.title('Total Charges Distribution')plt.subplot(1, 3, 3)sns.histplot(df['Tenure'], bins=30, kde=True)plt.title('Tenure Distribution')plt.tight_layout()plt.show()# Bar plot for churn by gendersns.countplot(x='Churn', hue='Gender', data=df)plt.title('Churn by Gender')plt.show()# Chi-square test for independencefrom scipy.stats import chi2_contingency# Create a contingency tablecontingency_table = pd.crosstab(df['Churn'], df['Gender'])# Perform chi-square testchi2, p, dof, ex = chi2_contingency(contingency_table)print(f'Chi-square test result: chi2={chi2}, p-value={p}')

4. Model Building

 
from sklearn.model_selection import train_test_splitfrom sklearn.linear_model import LogisticRegressionfrom sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, roc_curve# Split the data into training and testing setsX = df_encoded.drop(['Churn_Yes'], axis=1)y = df_encoded['Churn_Yes']X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)# Build logistic regression modelmodel = LogisticRegression(max_iter=1000)model.fit(X_train, y_train)# Predictionsy_pred = model.predict(X_test)y_pred_prob = model.predict_proba(X_test)[:, 1]# Evaluationaccuracy = accuracy_score(y_test, y_pred)precision = precision_score(y_test, y_pred)recall = recall_score(y_test, y_pred)roc_auc = roc_auc_score(y_test, y_pred_prob)print(f'Accuracy: {accuracy}')print(f'Precision: {precision}')print(f'Recall: {recall}')print(f'ROC AUC: {roc_auc}')# ROC Curvefpr, tpr, _ = roc_curve(y_test, y_pred_prob)plt.figure()plt.plot(fpr, tpr, color='blue', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)plt.plot([0, 1], [0, 1], color='gray', linestyle='--')plt.xlim([0.0, 1.0])plt.ylim([0.0, 1.05])plt.xlabel('False Positive Rate')plt.ylabel('True Positive Rate')plt.title('Receiver Operating Characteristic')plt.legend(loc="lower right")plt.show()

5. Advanced Analysis

 
from sklearn.feature_selection import RFEfrom sklearn.ensemble import RandomForestClassifierfrom sklearn.svm import SVCfrom sklearn.model_selection import cross_val_score# Feature selection using RFEselector = RFE(model, n_features_to_select=10)selector = selector.fit(X_train, y_train)selected_features = X.columns[selector.support_]print(f'Selected features: {selected_features}')# Compare logistic regression with other modelsrf_model = RandomForestClassifier()svm_model = SVC(probability=True)# Cross-validationmodels = {'Logistic Regression': model, 'Random Forest': rf_model, 'SVM': svm_model}for name, clf in models.items(): scores = cross_val_score(clf, X, y, cv=5, scoring='roc_auc') print(f'{name} ROC AUC: {scores.mean()}')# Refit selected model on training databest_model = RandomForestClassifier()best_model.fit(X_train[selected_features], y_train)best_model_pred_prob = best_model.predict_proba(X_test[selected_features])[:, 1]best_model_roc_auc = roc_auc_score(y_test, best_model_pred_prob)print(f'Best model ROC AUC: {best_model_roc_auc}')

6. Interpretation and Reporting

 
# Interpret logistic regression coefficientscoefficients = pd.DataFrame({'Feature': X.columns, 'Coefficient': model.coef_[0]})coefficients = coefficients.sort_values(by='Coefficient', ascending=False)print(coefficients)# Multicollinearity checkfrom statsmodels.stats.outliers_influence import variance_inflation_factorvif_data = pd.DataFrame()vif_data["feature"] = X.columnsvif_data["VIF"] = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]print(vif_data)# Report findingsreport = """Customer Churn Analysis Report1. Summary of Findings:- The data was cleaned and prepared, and categorical variables were converted to dummy variables.- Exploratory data analysis revealed key insights into the distribution of variables and their relationships with churn.2. Model Building and Evaluation:- A logistic regression model was built to predict customer churn.- The model achieved an accuracy of {accuracy:.2f}, precision of {precision:.2f}, recall of {recall:.2f}, and ROC AUC of {roc_auc:.2f}.3. Advanced Analysis:- Feature selection identified the most significant predictors of churn.- Logistic regression was compared with other classification models, and the best model achieved a ROC AUC of {best_model_roc_auc:.2f}.4. Interpretation:- The logistic regression model identified significant factors associated with churn, such as tenure, monthly charges, and contract type.- Variance inflation factors were checked to ensure no multicollinearity issues.Recommendations:- The company should focus on improving customer retention strategies, particularly for customers with month-to-month contracts and higher monthly charges.- Additional support services and incentives could be targeted towards senior citizens and customers with dependents to reduce churn."""print(report.format(accuracy=accuracy, precision=precision, recall=recall, roc_auc=roc_auc, best_model_roc_auc=best_model_roc_auc))

This comprehensive guide not only solves a complex statistical problem but also serves as a valuable resource for students and professionals seeking Statistics Assignment help service. By following this detailed example, you can enhance your understanding and skills in statistical analysis, data preparation, modeling, and interpretation.

Comments