phillust-blog
Data Analysis Course
8 posts
Data Analysis Tools Week 4 Assignment
I'm looking into whether the region someone lives in moderates the association between depression and thoughts of suicide. The output shows that region is a significant moderator of the relationship between depression and suicidal ideation, as the chi-square p-values are significant in all four regions.
Code:
# -*- coding: utf-8 -*-
"""
@author: Phil Lust
"""

import pandas
import numpy
import scipy.stats
import seaborn
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

#Data Read in
data = pandas.read_csv('nesarc_pds.csv', low_memory=False)

#set pandas to show all columns in dataframe
pandas.set_option('display.max_columns', None)

#set pandas to show all rows in dataframe
pandas.set_option('display.max_rows', None)

# sets strings to uppercase in all DataFrame column names
data.columns = map(str.upper, data.columns)

# bug fix for display formats to avoid runtime errors
pandas.set_option('display.float_format', lambda x: '%f' % x)

#set variables of interest to numeric
data['S4AQ4A17'] = pandas.to_numeric(data['S4AQ4A17'], errors='coerce')
data['S1Q213'] = pandas.to_numeric(data['S1Q213'], errors='coerce')
data['REGION'] = pandas.to_numeric(data['REGION'], errors='coerce')

#replacing unknown values in the dataframe
data['S4AQ4A17'] = data['S4AQ4A17'].replace(9, numpy.nan)
data['S1Q213'] = data['S1Q213'].replace(9, numpy.nan)

#recode of values for logical consistency
recode1 = {1: 1, 2: 0}
data['S4AQ4A17'] = data['S4AQ4A17'].map(recode1)
recode2 = {1: 4, 2: 3, 3: 2, 4: 1, 5: 0}
data['S1Q213'] = data['S1Q213'].map(recode2)

#data subsets for moderator exploration of REGION
sub1 = data[(data['REGION'] == 1)]
sub2 = data[(data['REGION'] == 2)]
sub3 = data[(data['REGION'] == 3)]
sub4 = data[(data['REGION'] == 4)]

#crosstab table of observed counts
ct1 = pandas.crosstab(sub1['S4AQ4A17'], sub1['S1Q213'])
print(ct1)

#column percentages
colsum1 = ct1.sum(axis=0)
colpct1 = ct1 / colsum1
print(colpct1)
#chi-square on Northeast subgroup
print('association between depression and suicidal thoughts in the Northeast')
cs1 = scipy.stats.chi2_contingency(ct1)
print(cs1)
#crosstab table of observed counts
ct2 = pandas.crosstab(sub2['S4AQ4A17'], sub2['S1Q213'])
print(ct2)

#column percentages
colsum2 = ct2.sum(axis=0)
colpct2 = ct2 / colsum2
print(colpct2)

#chi-square on Midwest subgroup
print('association between depression and suicidal thoughts in the Midwest')
cs2 = scipy.stats.chi2_contingency(ct2)
print(cs2)

#crosstab table of observed counts
ct3 = pandas.crosstab(sub3['S4AQ4A17'], sub3['S1Q213'])
print(ct3)

#column percentages
colsum3 = ct3.sum(axis=0)
colpct3 = ct3 / colsum3
print(colpct3)

#chi-square on South subgroup
print('association between depression and suicidal thoughts in the South')
cs3 = scipy.stats.chi2_contingency(ct3)
print(cs3)

#crosstab table of observed counts
ct4 = pandas.crosstab(sub4['S4AQ4A17'], sub4['S1Q213'])
print(ct4)

#column percentages
colsum4 = ct4.sum(axis=0)
colpct4 = ct4 / colsum4
print(colpct4)

#chi-square on West subgroup
print('association between depression and suicidal thoughts in the West')
cs4 = scipy.stats.chi2_contingency(ct4)
print(cs4)
Output:
S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000       672       616       448       152        53
1.000000       131       236       179       104        49

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000  0.836862  0.723005  0.714514  0.593750  0.519608
1.000000  0.163138  0.276995  0.285486  0.406250  0.480392

association between depression and suicidal thoughts in the Northeast
(95.327124345434868, 9.7091644408750795e-20, 4,
 array([[ 590.3875    ,  626.41363636,  460.9875    ,  188.21818182,   74.99318182],
        [ 212.6125    ,  225.58636364,  166.0125    ,   67.78181818,   27.00681818]]))

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000       852       759       426       181        59
1.000000       160       277       216       105        43

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000  0.841897  0.732625  0.663551  0.632867  0.578431
1.000000  0.158103  0.267375  0.336449  0.367133  0.421569

association between depression and suicidal thoughts in the Midwest
(105.24586010821909, 7.507621975326203e-22, 4,
 array([[ 748.64327485,  766.39766082,  474.92982456,  211.57309942,   75.45614035],
        [ 263.35672515,  269.60233918,  167.07017544,   74.42690058,   26.54385965]]))

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000      1358      1062       796       341       155
1.000000       233       330       298       188       115

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000  0.853551  0.762931  0.727605  0.644612  0.574074
1.000000  0.146449  0.237069  0.272395  0.355388  0.425926

association between depression and suicidal thoughts in the South
(173.07239474939345, 2.2908395479186054e-36, 4,
 array([[ 1211.19606235,  1059.70139459,   832.84003281,   402.71698113,   205.54552912],
        [  379.80393765,   332.29860541,   261.15996719,   126.28301887,    64.45447088]]))

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000       734       745       458       165        49
1.000000       194       315       225       115        48

S1Q213    0.000000  1.000000  2.000000  3.000000  4.000000
S4AQ4A17
0.000000  0.790948  0.702830  0.670571  0.589286  0.505155
1.000000  0.209052  0.297170  0.329429  0.410714  0.494845

association between depression and suicidal thoughts in the West
(73.628402483107678, 3.8855318766982616e-15, 4,
 array([[ 654.8976378 ,  748.0511811 ,  481.99901575,  197.5984252 ,   68.45374016],
        [ 273.1023622 ,  311.9488189 ,  201.00098425,   82.4015748 ,   28.54625984]]))
Data Analysis Tools Week 3 Assignment
I made a small adjustment this week to allow for calculation of a correlation coefficient, since my selected data elements contain only categorical data. I looked into the relationship between the quantity of alcoholic beverages consumed and reported depression. I found a correlation coefficient of 0.087 with a p-value of 0.055.

As the p-value is not below 0.05, we fail to reject the null hypothesis.
Output:
association between alcohol use and depression
(0.08677304304640121, 0.055418860920723026)
Code Follows:
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 27 19:01:29 2017

@author: 585441
"""

# -*- coding: utf-8 -*-
"""
Created on Sat Oct  7 17:29:57 2017

@author: Phil Lust
"""
import pandas
import numpy
import scipy.stats
import seaborn
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

data = pandas.read_csv('nesarc_pds.csv', low_memory=False)

#set pandas to show all columns in dataframe
pandas.set_option('display.max_columns', None)

#set pandas to show all rows in dataframe
pandas.set_option('display.max_rows', None)

# sets strings to uppercase in all DataFrame column names
data.columns = map(str.upper, data.columns)

# bug fix for display formats to avoid runtime errors
pandas.set_option('display.float_format', lambda x: '%f' % x)

# Converts data entries to number format, correcting python data read-in errors
data["S2AQ22"] = pandas.to_numeric(data["S2AQ22"], errors='coerce')
data["S1Q213"] = pandas.to_numeric(data["S1Q213"], errors='coerce')

#replacing unknown values in the dataframe
data['S2AQ22'] = data['S2AQ22'].replace(99, numpy.nan)
data['S1Q213'] = data['S1Q213'].replace(9, numpy.nan)

#recode alcohol use from low to high
recode1 = {11: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10}
data['S2AQ22'] = data['S2AQ22'].map(recode1)

#recode depression from low to high
recode2 = {5: 0, 1: 1, 2: 2, 3: 3, 4: 4}
data['S1Q213'] = data['S1Q213'].map(recode2)

#drop blank and nan entries to enable correlation calculation
data_clean = data.dropna()

#correlation calculation
print('association between alcohol use and depression')
print(scipy.stats.pearsonr(data_clean['S2AQ22'], data_clean['S1Q213']))
Data Analysis Tools Week 2 Assignment
I'm looking at the relationship between suicidal thoughts [S4AQ4A17] and seeking medical help [S4AQ16] from the NESARC data set; both are two-level categorical variables. The chi-square test yielded a p-value of 2.43 x 10^-93, effectively 0. This means we reject the null hypothesis that there is no relationship between suicidal ideation and seeking help, and accept the alternative that there is a relationship. Since both variables have only two levels, there is no need for a post hoc test to determine the pairwise relationship.
Output from code:
S4AQ4A17  1.000000  2.000000
S4AQ16
1.000000      2135      2401
2.000000      1098      3138

S4AQ4A17  1.000000  2.000000
S4AQ16
1.000000  0.660377  0.433472
2.000000  0.339623  0.566528

chi-square value, p value, expected counts
(420.00948971287778, 2.4287382795898561e-93, 1,
 array([[ 1671.78385773,  2864.21614227],
        [ 1561.21614227,  2674.78385773]]))
The code follows:
import pandas
import numpy
import scipy.stats
import seaborn
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

data = pandas.read_csv('nesarc_pds.csv', low_memory=False)

#set pandas to show all columns in dataframe
pandas.set_option('display.max_columns', None)

#set pandas to show all rows in dataframe
pandas.set_option('display.max_rows', None)

# sets strings to uppercase in all DataFrame column names
data.columns = map(str.upper, data.columns)

# bug fix for display formats to avoid runtime errors
pandas.set_option('display.float_format', lambda x: '%f' % x)

# Converts data entries to number format, correcting python data read-in errors
data["S4AQ16"] = pandas.to_numeric(data["S4AQ16"], errors='coerce')
data["S4AQ4A17"] = pandas.to_numeric(data["S4AQ4A17"], errors='coerce')

#replacing unknown values in the dataframe
data['S4AQ16'] = data['S4AQ16'].replace(9, numpy.nan)
data['S4AQ4A17'] = data['S4AQ4A17'].replace(9, numpy.nan)

#contingency table of observed results
ct1 = pandas.crosstab(data['S4AQ16'], data['S4AQ4A17'])
print(ct1)

#column percentages
colsum = ct1.sum(axis=0)
colpct = ct1 / colsum
print(colpct)

#chi-square test
print('chi-square value, p value, expected counts')
cs1 = scipy.stats.chi2_contingency(ct1)
print(cs1)
Data Analysis Tools Week 1 Assignment
My study question does not include any quantitative variables, so I ran an ANOVA on age at onset of alcohol dependence and household income.

The ANOVA showed a relationship between age at onset of alcohol dependence and household income (p = 0.0146). However, the Tukey test showed no differences between any pair of income groups in the pairwise comparisons.
The program I used is as follows:
import pandas
import numpy
import seaborn
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

data = pandas.read_csv('nesarc_pds.csv', low_memory=False)

#set pandas to show all columns in dataframe
pandas.set_option('display.max_columns', None)

#set pandas to show all rows in dataframe
pandas.set_option('display.max_rows', None)

# sets strings to uppercase in all DataFrame column names
data.columns = map(str.upper, data.columns)

# bug fix for display formats to avoid runtime errors
pandas.set_option('display.float_format', lambda x: '%f' % x)

# Converts data entries to number format, correcting python data read-in errors
data["S1Q12B"] = pandas.to_numeric(data["S1Q12B"], errors='coerce')
data["S2BQ2D"] = pandas.to_numeric(data["S2BQ2D"], errors='coerce')
data["S2BQ2FR"] = pandas.to_numeric(data["S2BQ2FR"], errors='coerce')
data["ALCABDEPP12DX"] = pandas.to_numeric(data["ALCABDEPP12DX"], errors='coerce')

#replacing unknown values in the dataframe
data['S2BQ2D'] = data['S2BQ2D'].replace(99, numpy.nan)
data['S2BQ2FR'] = data['S2BQ2FR'].replace(999, numpy.nan)

#subset of data to only include those who have had alcohol dependence in the
#last 12 months
sub1 = data[(data['ALCABDEPP12DX'] != 0)]

#ANOVA: age at onset of alcohol dependence (S2BQ2D) by household income group (S1Q12B)
model1 = smf.ols(formula='S2BQ2D ~ C(S1Q12B)', data=sub1)
results1 = model1.fit()
print(results1.summary())

#post hoc Tukey HSD pairwise comparisons
mc1 = multi.MultiComparison(sub1['S2BQ2D'], sub1['S1Q12B'])
res1 = mc1.tukeyhsd()
print(res1.summary())
The output is:
                            OLS Regression Results
==============================================================================
Dep. Variable:                 S2BQ2D   R-squared:                       0.008
Model:                            OLS   Adj. R-squared:                  0.003
Method:                 Least Squares   F-statistic:                     1.813
Date:                Thu, 02 Nov 2017   Prob (F-statistic):             0.0146
Time:                        18:22:33   Log-Likelihood:                -16996.
No. Observations:                4671   AIC:                         3.403e+04
Df Residuals:                    4650   BIC:                         3.417e+04
Df Model:                          20
Covariance Type:            nonrobust
===================================================================================
                      coef    std err          t      P>|t|      [0.025      0.975]
-----------------------------------------------------------------------------------
Intercept          24.9000      0.780     31.937      0.000      23.372      26.428
C(S1Q12B)[T.2]      1.8708      1.025      1.825      0.068      -0.139       3.881
C(S1Q12B)[T.3]      1.5815      1.113      1.421      0.155      -0.600       3.763
C(S1Q12B)[T.4]      0.4700      1.017      0.462      0.644      -1.523       2.463
C(S1Q12B)[T.5]     -1.0440      1.135     -0.920      0.358      -3.270       1.182
C(S1Q12B)[T.6]      0.0695      0.947      0.073      0.941      -1.787       1.926
C(S1Q12B)[T.7]      0.2514      0.936      0.269      0.788      -1.584       2.087
C(S1Q12B)[T.8]      0.2472      0.932      0.265      0.791      -1.580       2.075
C(S1Q12B)[T.9]     -0.9623      0.916     -1.051      0.293      -2.758       0.833
C(S1Q12B)[T.10]    -0.8889      0.960     -0.926      0.355      -2.771       0.993
C(S1Q12B)[T.11]    -0.3715      0.891     -0.417      0.677      -2.119       1.376
C(S1Q12B)[T.12]    -1.0150      0.890     -1.140      0.254      -2.760       0.730
C(S1Q12B)[T.13]    -1.5657      0.930     -1.684      0.092      -3.388       0.257
C(S1Q12B)[T.14]    -0.4894      0.965     -0.507      0.612      -2.381       1.403
C(S1Q12B)[T.15]    -1.1780      1.011     -1.165      0.244      -3.161       0.805
C(S1Q12B)[T.16]    -0.7991      1.178     -0.678      0.498      -3.109       1.511
C(S1Q12B)[T.17]    -0.0615      1.124     -0.055      0.956      -2.264       2.141
C(S1Q12B)[T.18]    -0.4493      1.344     -0.334      0.738      -3.084       2.186
C(S1Q12B)[T.19]    -0.6170      1.188     -0.519      0.603      -2.945       1.712
C(S1Q12B)[T.20]    -0.1169      1.278     -0.091      0.927      -2.622       2.389
C(S1Q12B)[T.21]     0.3235      1.268      0.255      0.799      -2.163       2.810
==============================================================================
Omnibus:                     1472.119   Durbin-Watson:                   1.987
Prob(Omnibus):                  0.000   Jarque-Bera (JB):             4157.875
Skew:                           1.670   Prob(JB):                         0.00
Kurtosis:                       6.195   Cond. No.                         27.8
==============================================================================
Tukey Test:
Multiple Comparison of Means - Tukey HSD, FWER=0.05
=========================================
group1 group2 meandiff lower upper reject
-----------------------------------------
  1      2       nan     nan   nan  False
  1      3       nan     nan   nan  False
  1      4       nan     nan   nan  False
 ...    ...      ...     ...   ...    ...
  19     21      nan     nan   nan  False
  20     21      nan     nan   nan  False
-----------------------------------------
(All 210 pairwise comparisons between the 21 income groups returned meandiff = nan and reject = False.)
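The all-nan rows typically appear when the columns passed to the comparison still contain missing values. As a hedged sketch only (assuming the same sub1 subset and column names as above; this is not the code that produced the output), the pairwise test could be rerun on complete cases:

# sketch (assumption): drop rows with missing values before the pairwise comparison
tukey_data = sub1[['S2BQ2D', 'S1Q12B']].dropna()

# rerun Tukey HSD on complete cases only
mc1b = multi.MultiComparison(tukey_data['S2BQ2D'], tukey_data['S1Q12B'])
res1b = mc1b.tukeyhsd()
print(res1b.summary())

With the missing values removed, the mean differences and reject flags should be populated for each pair of income groups.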
Data Analysis Course Week 4 Assignment
I created 4 bar charts for my variables. The first shows the number of service members who reported being depressed in the last 4 weeks. It is a categorical variable and only takes the values 1-4. I removed those individuals who did not report being depressed.

The chart shows a negative trend: the count of people falls off as the reported frequency of depression in the last 4 weeks increases.

The second chart is similar to the first, except it covers the remaining records, for people who are not service members.

This chart shows the same trend as that of the service members.

The next two charts are bivariate charts. The first shows the percentage of service members who reported being depressed and also had suicidal ideation.

There is no real trend shown in the data, as there are only 4 categories, although the chart does show that half or more of service members with more severe depression thought about suicide.
The second chart shows the percentage of service members who reported being depressed and sought medical help.
It is positive to note that service members with more severe depression did seek medical help at a 50% or higher rate, although these numbers can be misleading because of the low populations in the more severe levels.
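The chart images themselves are not embedded in this post. As an illustration only, here is a minimal sketch of how bar charts like these could be produced with seaborn, assuming the data frame from the earlier read-in, a service-member flag (S1Q9B == 14, per the occupation coding noted in an earlier post), and the recoded variables from the other posts (S1Q213 recoded 0-4, S4AQ4A17 recoded 0/1); this is not the original plotting code:

# hedged sketch, not the original plotting code
import seaborn
import matplotlib.pyplot as plt

# assumed subsets: service members, and those who reported any depression
military = data[data['S1Q9B'] == 14]
depressed = military[military['S1Q213'] > 0]

# univariate bar chart: counts at each reported depression level (1-4)
seaborn.countplot(x='S1Q213', data=depressed)
plt.xlabel('Reported depression in last 4 weeks (1 = a little, 4 = all of the time)')
plt.ylabel('Count of service members')
plt.show()

# bivariate bar chart: proportion with suicidal ideation at each depression level
seaborn.barplot(x='S1Q213', y='S4AQ4A17', data=military)
plt.xlabel('Reported depression in last 4 weeks')
plt.ylabel('Proportion reporting suicidal ideation')
plt.show()

Because S4AQ4A17 is recoded 0/1, the bar height in the second chart is simply the share of people at each depression level who reported suicidal thoughts.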
Data Analysis Course Week 3 Assignment
I adjusted my script to make three changes to the data structure.

1. Replaced unknown values in all data fields of interest with an unknown-data identifier.

2. S1Q213 (reports of depression) was recoded so that higher levels of reported depression take a higher number. The original coding ran from 1 (high) to 5 (low); it was recoded to 4 (high) through 0 (low).

3. Mapped S4AQ16 & S4AQ4A17 from 1 (Yes) and 2 (No) to 1 (Yes) and 0 (No), so that multiplying the resulting binary variables identifies those who sought medical help and had suicidal thoughts.

I would also like to bin my results into two secondary variables, combining those who sought help and those who didn't, but I was getting a syntax error even though the code mirrored the example.
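As an illustration only, here is a minimal sketch of what the three changes above might look like in pandas, assuming the data DataFrame from the earlier read-in posts. The derived variable names (4wkDprsd, DprsdSawDoc, SuicideSawDoc) come from the output below, but the helper names and the exact construction of the secondary variables are assumptions, not the original script:

# hedged sketch of the data-management steps described above (not the original script)
import numpy

# 1. replace unknown values with an unknown-data identifier (NaN)
data['S1Q213'] = data['S1Q213'].replace(9, numpy.nan)
data['S4AQ16'] = data['S4AQ16'].replace(9, numpy.nan)
data['S4AQ4A17'] = data['S4AQ4A17'].replace(9, numpy.nan)

# 2. recode depression so higher numbers mean more depression: 1 (high)-5 (low) -> 4 (high)-0 (low)
data['4wkDprsd'] = data['S1Q213'].map({1: 4, 2: 3, 3: 2, 4: 1, 5: 0})

# 3. map yes/no answers from 1/2 to 1/0 so they can be multiplied as binary flags
#    ('SawDoc' and 'Suicidal' are hypothetical helper names)
data['SawDoc'] = data['S4AQ16'].map({1: 1, 2: 0})
data['Suicidal'] = data['S4AQ4A17'].map({1: 1, 2: 0})

# assumed construction of the secondary variables shown in the output below
data['DprsdSawDoc'] = data['4wkDprsd'] * data['SawDoc']
data['SuicideSawDoc'] = data['4wkDprsd'] * data['Suicidal'] * data['SawDoc']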
Frequency Distributions
Counts for 4wkDprsd for all individuals who answered being depressed a little or more in the last 4 weeks
2.000000     49
0.000000    258
1.000000     97
4.000000      8
3.000000     15
nan           4
Name: 4wkDprsd, dtype: int64

% for 4wkDprsd for all individuals who answered being depressed a little or more in the last 4 weeks
2.000000   0.113689
0.000000   0.598608
1.000000   0.225058
4.000000   0.018561
3.000000   0.034803
nan        0.009281
Name: 4wkDprsd, dtype: float64
Discussion: ~60% of the surveyed military populace did not report being depressed at all during the past 4 weeks, while up to ~5% reported being depressed most or all of the time.
Counts for DprsdSawDoc for all individuals who answered being depressed a little or more in the last 4 weeks and saw a doctor
4.000000     2
3.000000     7
0.000000    25
2.000000     4
1.000000     3
Name: DprsdSawDoc, dtype: int64

% for DprsdSawDoc for all individuals who answered being depressed a little or more in the last 4 weeks and saw a doctor
4.000000   0.048780
3.000000   0.170732
0.000000   0.609756
2.000000   0.097561
1.000000   0.073171
Name: DprsdSawDoc, dtype: float64
Discussion: ~39% of those who stated they were depressed at least a little sought medical help. Later I will probably explore the ratio of those who sought help within each level of reported depression.
Counts for SuicideSawDoc for all individuals who answered being depressed a little or more in the last 4 weeks, thought of suicide, and saw a doctor
0.000000    63
1.000000     9
3.000000     6
4.000000     2
2.000000     2
Name: SuicideSawDoc, dtype: int64

% for SuicideSawDoc for all individuals who answered being depressed a little or more in the last 4 weeks, thought of suicide, and saw a doctor
0.000000   0.768293
1.000000   0.109756
3.000000   0.073171
4.000000   0.024390
2.000000   0.024390
Name: SuicideSawDoc, dtype: float64
Discussion: ~24% of those who stated they thought of suicide sought medical help. Later I will probably explore the ratio of those who sought help within each level of reported depression for those who thought about suicide.
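A minimal sketch of that planned follow-up, built on the hypothetical helper columns from the sketch above (an assumption, not the original script), using crosstab's normalize option to get the help-seeking share within each depression level:

# hedged sketch: help-seeking rate within each reported depression level
import pandas
help_by_level = pandas.crosstab(data['4wkDprsd'], data['SawDoc'], normalize='index')
print(help_by_level)

Each row then sums to 1, so the column for SawDoc = 1 gives the proportion at that depression level who sought medical help.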
Military Service and depression frequency distributions
Code:
"""
Spyder Editor
"""
import pandas
import numpy

data = pandas.read_csv('nesarc_pds.csv', low_memory=False)

#print(len(data))          # number of observations (rows)
#print(len(data.columns))  # number of variables (columns)

# sets strings to uppercase in all DataFrame column names
data.columns = map(str.upper, data.columns)

# bug fix for display formats to avoid runtime errors
pandas.set_option('display.float_format', lambda x: '%f' % x)

# Converts data entries to number format, correcting python data read-in errors
data["S1Q9A"] = pandas.to_numeric(data["S1Q9A"], errors='coerce')
data["S1Q9B"] = pandas.to_numeric(data["S1Q9B"], errors='coerce')
data["S1Q9C"] = pandas.to_numeric(data["S1Q9C"], errors='coerce')
data["S1Q213"] = pandas.to_numeric(data["S1Q213"], errors='coerce')
data["S4AQ1"] = pandas.to_numeric(data["S4AQ1"], errors='coerce')
data["S4AQ2"] = pandas.to_numeric(data["S4AQ2"], errors='coerce')
data["S4AQ4A12"] = pandas.to_numeric(data["S4AQ4A12"], errors='coerce')
data["S4AQ4A17"] = pandas.to_numeric(data["S4AQ4A17"], errors='coerce')
data["S4AQ4A18"] = pandas.to_numeric(data["S4AQ4A18"], errors='coerce')
data["S4AQ51"] = pandas.to_numeric(data["S4AQ51"], errors='coerce')
data["S4AQ16"] = pandas.to_numeric(data["S4AQ16"], errors='coerce')
# Frequency and normalized counts for variables
print("counts S1Q9A - Business or Industry: Current or most recent job, 14 = Armed Forces")
c1 = data["S1Q9A"].value_counts(sort=False)
print(c1)

print("% for S1Q9A - Business or Industry: Current or most recent job, 14 = Armed Forces")
p1 = data["S1Q9A"].value_counts(sort=False, normalize=True)
print(p1)

print("counts S1Q9B - Occupation: Current or most recent job, 14 = Military")
c2 = data["S1Q9B"].value_counts(sort=False)
print(c2)

print("% S1Q9B - Occupation: Current or most recent job, 14 = Military")
p2 = data["S1Q9B"].value_counts(sort=False, normalize=True)
print(p2)

print("counts S1Q9C - Type of Employer: Current or most recent job, 6 = Armed Forces")
c3 = data["S1Q9C"].value_counts(sort=False)
print(c3)

print("% S1Q9C - Type of Employer: Current or most recent job, 6 = Armed Forces")
p3 = data["S1Q9C"].value_counts(sort=False, normalize=True)
print(p3)

print("counts S1Q213 - During past 4 weeks, how often felt downhearted or depressed, 1 - 3 = All to Some of the time")
c4 = data["S1Q213"].value_counts(sort=False)
print(c4)

print("% S1Q213 - During past 4 weeks, how often felt downhearted or depressed, 1 - 3 = All to Some of the time")
p4 = data["S1Q213"].value_counts(sort=False, normalize=True)
print(p4)

print("counts S4AQ1 - Ever had 2-week period felt sad, blue, depressed or down most of the time, 1 = Yes")
c5 = data["S4AQ1"].value_counts(sort=False)
print(c5)

print("% S4AQ1 - Ever had 2-week period felt sad, blue, depressed or down most of the time, 1 = Yes")
p5 = data["S4AQ1"].value_counts(sort=False, normalize=True)
print(p5)

print("counts S4AQ2 - Ever had 2-week period when didn't care about things, 1 = Yes")
c6 = data["S4AQ2"].value_counts(sort=False)
print(c6)

print("% S4AQ2 - Ever had 2-week period when didn't care about things, 1 = Yes")
p6 = data["S4AQ2"].value_counts(sort=False, normalize=True)
print(p6)

print("counts S4AQ4A12 - Felt worthless most of the time for 2+ weeks, 1 = Yes")
c7 = data["S4AQ4A12"].value_counts(sort=False)
print(c7)

print("% S4AQ4A12 - Felt worthless most of the time for 2+ weeks, 1 = Yes")
p7 = data["S4AQ4A12"].value_counts(sort=False, normalize=True)
print(p7)

print("counts S4AQ4A17 - Thought about committing suicide, 1 = Yes")
c8 = data["S4AQ4A17"].value_counts(sort=False)
print(c8)

print("% S4AQ4A17 - Thought about committing suicide, 1 = Yes")
p8 = data["S4AQ4A17"].value_counts(sort=False, normalize=True)
print(p8)

print("counts S4AQ4A18 - Felt like wanted to die, 1 = Yes")
c9 = data["S4AQ4A18"].value_counts(sort=False)
print(c9)

print("% S4AQ4A18 - Felt like wanted to die, 1 = Yes")
p9 = data["S4AQ4A18"].value_counts(sort=False, normalize=True)
print(p9)

print("counts S4AQ51 - Felt uncomfortable or upset by low mood, 1 = Yes")
c10 = data["S4AQ51"].value_counts(sort=False)
print(c10)

print("% S4AQ51 - Felt uncomfortable or upset by low mood, 1 = Yes")
p10 = data["S4AQ51"].value_counts(sort=False, normalize=True)
print(p10)

print("counts S4AQ16 - Went to counselor/therapist/doctor to help mood, 1 = Yes")
c11 = data["S4AQ16"].value_counts(sort=False)
print(c11)

print("% S4AQ16 - Went to counselor/therapist/doctor to help mood, 1 = Yes")
p11 = data["S4AQ16"].value_counts(sort=False, normalize=True)
print(p11)
OUTPUTS
counts S1Q213 - During past 4 weeks, how often felt downhearted or depressed, 1 - 3 = All to Some of the time
1      907
2     2051
3     6286
4    11305
5    22127
9      417
Name: S1Q213, dtype: int64

% S1Q213 - During past 4 weeks, how often felt downhearted or depressed, 1 - 3 = All to Some of the time
1   0.021048
2   0.047595
3   0.145871
4   0.262340
5   0.513471
9   0.009677
Name: S1Q213, dtype: float64

counts S4AQ1 - Ever had 2-week period felt sad, blue, depressed or down most of the time, 1 = Yes
1    12785
2    29416
9      892
Name: S4AQ1, dtype: int64

% S4AQ1 - Ever had 2-week period felt sad, blue, depressed or down most of the time, 1 = Yes
1   0.296684
2   0.682617
9   0.020699
Name: S4AQ1, dtype: float64

counts S4AQ2 - Ever had 2-week period when didn't care about things, 1 = Yes
1    10533
2    31618
9      942
Name: S4AQ2, dtype: int64

% S4AQ2 - Ever had 2-week period when didn't care about things, 1 = Yes
1   0.244425
2   0.733715
9   0.021860
Name: S4AQ2, dtype: float64
The output shows the counts and normalized data for three questions from the NESARC data set. The final value (9) in each output set marks missing data. For S1Q213, the scale is a Likert scale, with 1-3 being the values of interest. S4AQ1 and S4AQ2 are yes/no questions. There is not much to say about the distribution of the data until it is charted.
Week 1 Assignment
Data Set: NESARC - National Epidemiologic Survey on Alcohol and Related Conditions

Topic of Interest: What is the association between military service and depression, in comparison to the general population?

Secondary Topic: Does this association change based upon the age of the service member?

The second topic uses the same variables as the first.
Literature Search:
Terms: Military service and depression (2003-2010)

The literature search revealed that most efforts have tried to measure the impact of Global War on Terror (OIF & OEF) deployments on service member mental health. There were no direct comparisons of military service to the general US population. The closest parallel is a 21-year study started in 2003 to understand the impact of service on mental health (Millennium Cohort: enrollment begins a 21-year contribution to understanding the impact of military service - https://doi.org/10.1016/j.jclinepi.2006.05.009).

Hypothesis: There is no significant difference in depression rates between service members and the general population, as the services are a reflection of the population due to the all-volunteer force. The lack of association may not hold across ages.