#datamodule
anne-blm · 6 years ago
Photo
iBeacon module, Bluetooth indoor positioning solution. The CC2640R2 BEACON is a broadcast-only (slave) device based on the Bluetooth Low Energy protocol. As a beacon it is usually placed at a fixed indoor location, where it broadcasts continuously to its surroundings but cannot be connected to by any Bluetooth Low Energy host. Bluetooth beacon terminals are generally used for applications such as indoor positioning and navigation and precise location-based marketing, for example indoor navigation in large supermarkets and location-triggered merchandise coupons. #TIbluetooth #datamodule #bluetoothboard #ibeaconmodule https://www.instagram.com/p/Bz7wFdLpmVt/?igshid=1dsy211xlkddb
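The post describes only the broadcast side; on the receiving side, an iBeacon frame is a fixed-layout, manufacturer-specific payload. Below is a minimal decoding sketch in Python (my addition, not from the post), assuming you already have the raw manufacturer-data bytes from a BLE scan; the UUID, major, and minor values in the example are hypothetical:

import struct
import uuid

def parse_ibeacon(mfg_data):
    # Apple company ID 0x004C (little-endian), iBeacon type 0x02, length 0x15
    if len(mfg_data) < 25 or mfg_data[:4] != b'\x4c\x00\x02\x15':
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])       # 16-byte region UUID
    major, minor = struct.unpack('>HH', mfg_data[20:24])   # big-endian group/unit IDs
    tx_power = struct.unpack('b', mfg_data[24:25])[0]      # calibrated RSSI at 1 m, signed dBm
    return proximity_uuid, major, minor, tx_power

# hypothetical frame: UUID + major=1, minor=2, tx power=-59 dBm (0xC5)
frame = bytes.fromhex('4c000215f7826da64fa24e988024bc5b71e0893e00010002c5')
print(parse_ibeacon(frame))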
0 notes
endangeredaudio-blog · 8 years ago
Photo
@audulus - #ModularGrid dreams - putting together my ideal rig for use with Audulus. Once Taylor adds MIDI out and arbitrary audio I/O, I'll be able to make Audulus my sequencer/envelope/modulation brain. The heart of this rig is the @qubitelectronix Chord - all four voices are individually enveloped by the @macromachines #omnimod which is itself triggered by MIDI out in #Audulus through the @expertsleepers FH-1X. The Chord Mix output is enveloped by Audulus separately through the #ExpertSleepers ES8. Each voice has its own filter - two @twohpmodular #Filt's, one @stgsoundlabs #SeaDevilsFilter and a #MankatoFilter, and finally the Chord Mix into the #MutableInstruments #ShelvesEQ. The Expert Sleepers Disting Mk4s are mostly there to be drum samplers. They have a submix and get some tube distortion from the L-1 TubeVCA. Each voice can be mixed separately on the #QuBitElectronix Mixology and a companion 2hp Mix module with a MI Clouds on the send. Cap it off with a @mordaxsystems #DATAmodule as a master clock, modulation source, and just awesome eye candy, and the plugins available in my @uaudio #Apollo8DUO and I've got a seriously powerful compact 7U system. Buy an Expert Sleepers #ES8 from @centurysoundlabs and you'll receive a free copy of #Audulus3! Let me know what you think of the rig in the comments below! 👌😎🎹🎛🔥 #eurorack #eurocrack #modularsynth #synth #synthesizer #analogsynth (at Asheville, North Carolina)
0 notes
fabianocatrinck · 3 years ago
Text
Object Pascal (Delphi) Tip #14 – Exporting the data shown on screen to Excel in Delphi, part 2.
In this article I bring a more up-to-date example of how to export data to Excel. In the example below we have a form with some data in a DBGrid, connected via a DataSource to a ClientDataSet. DM = DataModule.

// To start, add Vcl.OleAuto to the uses clause of your unit
procedure TfrmConsulta.btnRelatorioClick(Sender: TObject);
var
  Planilha : Variant;
begin
  Try
    Planilha :=…
0 notes
wiadomosciprasowe · 4 years ago
Text
Cargo ship number two in Norway: the new Mustang Mach-E is first out with unique software updates that make the car steadily better
https://www.y6.no/fraktskip-nummer-to-i-norge-nye-mustang-mach-e-forst-ute-med-unike-programvareoppdateringer-som-gjor-bilen-stadig-bedre/
SHIP NUMBER 2: The second cargo ship filled with Mustang Mach-Es arrived in Drammen earlier this week. The photo was taken after all the cars had been driven ashore and the ship had left the port of Drammen. Date: 13-05-2021 14:00 CEST. Original press release title: Cargo ship number two in Norway: the new Mustang Mach-E is first out with unique software updates that make the car steadily better. Categories: Motor; cars, motorcycles; economy, finance; science, technology; data, telecom, IT; environment, energy; media, communication; industry, manufacturing; transport; road traffic.
With the new Ford Power-Up technology, the Mustang Mach-E is the first Ford model to receive wireless software updates. This week yet another car carrier fully loaded with new customer-ordered Mustang Mach-Es arrived in Drammen, Norway.
– The many thousands of customers who will receive their new all-electric SUV over the coming weeks are also among the first in Europe to get a software update with several improvements wirelessly over the Internet. Going forward, improvements to the car, both to its features and to the car in general, will largely happen this way, says Per Gunnar Berg, Managing Director of Ford Motor Norge.
Three full ships in two weeks
On Tuesday this week, ship number two loaded with customer-ordered all-electric Mustang Mach-Es arrived in Drammen. A third car carrier with new Mustang Mach-Es is also scheduled to arrive later this week.
– Dealers all over Norway are working flat out to get cars out to customers as quickly as practically possible. The three ships that have now arrived or are on their way are just the start of a large-scale national roll-out of our new pride and joy, which we know many are waiting for, says Ford Motor Norge's Managing Director.
Unique technology improves Ford's models – first out on the Mustang Mach-E
With the Power-Up system, Ford will wirelessly update a range of its models with both new functionality and quality improvements. More than 6 million Ford vehicles are already prepared for this, as they are equipped with a modem. According to Ford's forecasts, as many as 33 million of its vehicles, both passenger cars and commercial vehicles, will be receiving this type of wireless update during 2028.
– We have invested in wireless technology that lets the update happen overnight, so that you wake up to a car that has gained a new entertainment feature or has been improved in some other way. With Ford Power-Up technology we can do this quickly for millions of cars, says Mark Harvey, Director, Enterprise Connectivity, Ford of Europe.
Can update nearly all of the vehicle's computer modules
Ford's new second-generation electrical architecture and advanced technology are the building blocks of Ford Power-Up software updates. They make it possible to update nearly all of the vehicle's computer modules, more than 110 of them on the most advanced models. This lets Ford continuously deliver new functionality and quality improvements to its cars for many years to come. It can also reduce the need for repairs.
Many of the Ford Power-Up updates will happen almost without customers noticing them. They are downloaded in the background, and the new software is activated only once installation is complete. Any update that requires the vehicle to be parked can be scheduled for whenever suits the customer best.
In the new Mustang Mach-E, almost everything can be updated and improved throughout the car's lifetime, ranging from better performance to new features and services.
Drawing app and infotainment updates over the coming weeks
The first update of the new Mustang Mach-E in Europe installs the Ford Sketch application on the car's 15.5-inch touchscreen, a program that offers simple games and drawing while the vehicle is parked. New features and improvements will follow over time. Power-Up updates that Mach-E customers receive over the next few weeks include a new remote vehicle setup and new trip logs: using the FordPass app on your mobile phone, you can set up your own profiles, including your own radio settings, and send them to the car. You can also monitor battery usage and other details from each of your trips in the app.
The infotainment system in the Mustang Mach-E, SYNC4A, is also being improved, including navigation, Apple CarPlay, voice-controlled Internet search, and the digital owner's manual.
Source: Press office, Ford Motor Norge AS – PRESS RELEASE –
————
About Ford Motor Company
Ford Motor Company is a global company headquartered in Dearborn, Michigan, USA. The company develops, manufactures, markets, and services a range of vehicles including passenger cars, SUVs, commercial vehicles, trucks, electrified vehicles, and Lincoln luxury cars. The group also provides financial services through Ford Motor Credit Company. Ford Motor Company aims to take a leading role in electrification, mobility solutions, including self-driving cars, and connected services. Ford has approximately 188,000 employees worldwide. For more information about Ford, its products, and Ford Motor Credit Company, please see www.corporate.ford.com
Ford of Europe is responsible for producing, selling, and servicing Ford vehicles in 50 markets and has approximately 43,000 employees at its wholly owned facilities and consolidated joint ventures, and around 55,000 employees when unconsolidated businesses are included. In addition to Ford Motor Credit Company, Ford of Europe's operations include the Ford service market organization and 19 manufacturing facilities (12 wholly owned and 7 unconsolidated joint ventures). The first Ford car was shipped to Europe in 1903, the same year Ford Motor Company was founded. Production in Europe started in 1911.
Ford Motor Norge is a car producer and mobility company that imports cars and parts for sale through a dealer network of 82 sales and service points. The first Ford car came to Norway as early as 1906, probably through the firm E.C. Gjestvang in Oslo. For a long time only private dealers imported Fords to Norway via Denmark; Ford Motor Norge was not established until 1960, after strong engagement from, among others, Trygve Lie, the first Secretary-General of the UN and Norwegian ambassador to Washington.
0 notes
galactissolutions · 8 years ago
Text
20463-Implementing-a-Data-Warehouse-with-Microsoft-SQL-Server-2014-training-thailand
20463: Implementing a Data Warehouse with Microsoft SQL Server 2014

Course Description
Duration: 5.00 days (40 hours)
This course describes how to implement a data warehouse platform to support a BI solution. Students will learn how to create a data warehouse with Microsoft SQL Server 2014, implement ETL with SQL Server Integration Services, and validate and cleanse data with SQL Server Data Quality Services and SQL Server Master Data Services. This course is designed for customers who are interested in learning SQL Server 2012 or SQL Server 2014. It covers the new features in SQL Server 2014, but also the important capabilities across the SQL Server data platform.

Intended Audience For This Course
» This course is intended for database professionals who need to create and support a data warehousing solution.

Course Objectives
» Describe data warehouse concepts and architecture considerations.
» Select an appropriate hardware platform for a data warehouse.
» Design and implement a data warehouse.
» Implement Data Flow in an SSIS package.
» Implement Control Flow in an SSIS package.
» Debug and troubleshoot SSIS packages.
» Implement an ETL solution that supports incremental data extraction.
» Implement an ETL solution that supports incremental data loading.
» Implement data cleansing by using Microsoft Data Quality Services.
» Implement Master Data Services to enforce data integrity.
» Extend SSIS with custom scripts and components.
» Deploy and configure SSIS packages.
» Describe how BI solutions can consume data from the data warehouse.

Course Outline
Module 1: Introduction to Data Warehousing – Overview of Data Warehousing; Considerations for a Data Warehouse Solution; Lab: Exploring a Data Warehousing Solution
Module 2: Planning Data Warehouse Infrastructure – Considerations for Data Warehouse Infrastructure; Planning Data Warehouse Hardware; Lab: Planning Data Warehouse Infrastructure
Module 3: Designing and Implementing a Data Warehouse – Data Warehouse Design Overview; Designing Dimension Tables; Designing Fact Tables; Physical Design for a Data Warehouse; Lab: Implementing a Data Warehouse
Module 4: Creating an ETL Solution with SSIS – Introduction to ETL with SSIS; Exploring Data Sources; Implementing Data Flow; Lab: Implementing Data Flow in an SSIS Package
Module 5: Implementing Control Flow in an SSIS Package – Introduction to Control Flow; Creating Dynamic Packages; Using Containers; Managing Consistency; Lab: Implementing Control Flow in an SSIS Package; Lab: Using Transactions and Checkpoints
Module 6: Debugging and Troubleshooting SSIS Packages – Debugging an SSIS Package; Logging SSIS Package Events; Handling Errors in an SSIS Package; Lab: Debugging and Troubleshooting an SSIS Package
Module 7: Implementing a Data Extraction Solution – Planning Data Extraction; Extracting Modified Data; Lab: Extracting Modified Data
Module 8: Loading Data into a Data Warehouse – Planning Data Loads; Using SSIS for Incremental Loads; Using Transact-SQL Loading Techniques; Lab: Loading a Data Warehouse
Module 9: Enforcing Data Quality – Introduction to Data Quality; Using Data Quality Services to Cleanse Data; Lab: Cleansing Data
Module 10: Master Data Services – Introduction to Master Data Services; Implementing a Master Data Services Model; Managing Master Data; Creating a Master Data Hub; Lab: Implementing Master Data Services
Module 11: Extending SQL Server Integration Services – Using Scripts in SSIS; Using Custom Components in SSIS; Lab: Using Custom Scripts
Module 12: Deploying and Configuring SSIS Packages – Overview of SSIS Deployment; Deploying SSIS Projects; Planning SSIS Package Execution; Lab: Deploying and Configuring SSIS Packages
Module 13: Consuming Data in a Data Warehouse – Introduction to Business Intelligence; Enterprise Business Intelligence; Self-Service BI and Big Data; Lab: Using a Data Warehouse
0 notes
sascourserablog · 8 years ago
Text
Week 4: Multivariate Analysis
CODE
Data Management
import pandas as pd
import numpy as np

# initialize 'data' for relevant variables
full_data = pd.read_csv('../ool_pds.csv', low_memory=False)
full_data.columns = map(str.upper, full_data.columns)
data = full_data[['PPAGE', 'W1_A11', 'W1_C2', 'W1_F1']].copy()

# "How many days in the past week did you watch national news programs on television or on the Internet?"
data['W1_A11'] = pd.to_numeric(data['W1_A11'], errors='coerce')
# "We hear a lot of talk these days about liberals and conservatives. Where would you place YOURSELF on this 7 point scale?"
data['W1_C2'] = pd.to_numeric(data['W1_C2'], errors='coerce')
# "When you think about your future, are you generally optimistic, pessimistic, or neither optimistic nor pessimistic?"
data['W1_F1'] = pd.to_numeric(data['W1_F1'], errors='coerce')

# convert NA to numpy.nan
data['W1_A11'] = data['W1_A11'].replace(-1, np.nan)
data['W1_C2'] = data['W1_C2'].replace(-1, np.nan)
data['W1_F1'] = data['W1_F1'].replace(-1, np.nan)

# recode A11 into more logical categories (days last week watched news)
recode_news = {1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}
data['NEWSFREWEEK'] = data['W1_A11'].map(recode_news)

# consolidate liberal/conservative categories (1: liberal, 2: moderate, 3: conservative)
recode_political = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 3, 7: 3}
data['POLORIENT'] = data['W1_C2'].map(recode_political)

# recode to moderates and non-moderates only
data['POLORIENT2'] = data['POLORIENT'].map({1: 0, 2: 1, 3: 0})
data['POLORIENT2'] = data['POLORIENT2'].astype('category')
data['POLORIENT2'] = data['POLORIENT2'].cat.rename_categories(['Non-moderate', 'Moderate'])

# define as categorical variables; rename categories
data['POLORIENT'] = data['POLORIENT'].astype('category')
data['POLORIENT'] = data['POLORIENT'].cat.rename_categories(["Liberal", "Moderate", "Conservative"])

# reassign W1_F1
data['OPTIMISM'] = data['W1_F1']
data['OPTIMISM'] = data['OPTIMISM'].astype('category')
data['OPTIMISM'] = data['OPTIMISM'].cat.rename_categories(["Optimistic", "Pessimistic", "Neither"])

# cut data into 18-35, 36-55, 56-120; make categorical
data['AGEGRP'] = pd.cut(data.PPAGE, [17, 35, 55, 120])
data['AGEGRP'] = data['AGEGRP'].astype('category')
data['AGEGRP'] = data['AGEGRP'].cat.rename_categories(["18 - 35 y/o", "36 - 55 y/o", "> 55 y/o"])
Data Analysis
import matplotlib
matplotlib.use('Agg')
import datamodule
import pandas as pd
import seaborn as sb
import scipy
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

data = datamodule.data

# Does age category moderate the relationship between political orientation
# (moderate/non-moderate) and news-viewing?

# Bivariate relationship
model_polvnews = smf.ols(formula='NEWSFREWEEK ~ C(POLORIENT2)', data=data).fit()
print ('Bivariate Analysis: Means comparison')
print (data[['POLORIENT2', 'NEWSFREWEEK']].groupby('POLORIENT2').mean())
print (model_polvnews.summary())

sub1 = data[['POLORIENT2', 'AGEGRP', 'NEWSFREWEEK']].dropna()
sub_age1 = sub1[(sub1['AGEGRP'] == '18 - 35 y/o')]
sub_age2 = sub1[(sub1['AGEGRP'] == '36 - 55 y/o')]
sub_age3 = sub1[(sub1['AGEGRP'] == '> 55 y/o')]

titles = {
    id(sub_age1): '18 - 35 y/o',
    id(sub_age2): '36 - 55 y/o',
    id(sub_age3): '> 55 y/o'
}

count = 0
for grp in [sub_age1, sub_age2, sub_age3]:
    print (titles[id(grp)])
    print (grp.groupby('POLORIENT2').mean())
    model = smf.ols(formula='NEWSFREWEEK ~ C(POLORIENT2)', data=grp).fit()
    print(model.summary())
    sb.factorplot(x="POLORIENT2", y="NEWSFREWEEK", data=grp, kind='bar', ci=None)
    plt.title(titles[id(grp)])
    plt.xlabel('Political Orientation')
    plt.ylabel('News-viewing frequency per week')
    count += 1
    plt.savefig('subgroup_' + str(count))
OUTPUT
Bivariate Analysis: Means comparison
              NEWSFREWEEK
POLORIENT2
Non-moderate     3.376566
Moderate         3.027586

OLS Regression Results (all respondents)
  Dep. Variable: NEWSFREWEEK        No. Observations: 2227
  Model: OLS (Least Squares)        Df Residuals: 2225   Df Model: 1
  R-squared: 0.004                  Adj. R-squared: 0.004
  F-statistic: 9.563                Prob (F-statistic): 0.00201
  Log-Likelihood: -5285.5           AIC: 1.057e+04   BIC: 1.059e+04
  Date: Mon, 19 Jun 2017   Time: 16:50:51   Covariance Type: nonrobust

                               coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                  3.3766     0.071   47.870   0.000    3.238    3.515
  C(POLORIENT2)[T.Moderate] -0.3490     0.113   -3.092   0.002   -0.570   -0.128

  Omnibus: 14.581   Prob(Omnibus): 0.001   Durbin-Watson: 1.990
  Jarque-Bera (JB): 197.680   Prob(JB): 1.19e-43   Skew: 0.199   Kurtosis: 1.596   Cond. No. 2.44

18 - 35 y/o
              NEWSFREWEEK
POLORIENT2
Non-moderate     2.146417
Moderate         1.795349

OLS Regression Results (18 - 35 y/o)
  Dep. Variable: NEWSFREWEEK        No. Observations: 536
  Model: OLS (Least Squares)        Df Residuals: 534   Df Model: 1
  R-squared: 0.007                  Adj. R-squared: 0.005
  F-statistic: 3.500                Prob (F-statistic): 0.0619
  Log-Likelihood: -1164.7           AIC: 2333.   BIC: 2342.
  Date: Mon, 19 Jun 2017   Time: 16:50:51   Covariance Type: nonrobust

                               coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                  2.1464     0.119   18.060   0.000    1.913    2.380
  C(POLORIENT2)[T.Moderate] -0.3511     0.188   -1.871   0.062   -0.720    0.018

  Omnibus: 67.826   Prob(Omnibus): 0.000   Durbin-Watson: 2.057
  Jarque-Bera (JB): 92.798   Prob(JB): 7.07e-21   Skew: 1.019   Kurtosis: 3.050   Cond. No. 2.45

36 - 55 y/o
              NEWSFREWEEK
POLORIENT2
Non-moderate     2.837782
Moderate         2.891496

OLS Regression Results (36 - 55 y/o)
  Dep. Variable: NEWSFREWEEK        No. Observations: 828
  Model: OLS (Least Squares)        Df Residuals: 826   Df Model: 1
  R-squared: 0.000                  Adj. R-squared: -0.001
  F-statistic: 0.09607              Prob (F-statistic): 0.757
  Log-Likelihood: -1917.3           AIC: 3839.   BIC: 3848.
  Date: Mon, 19 Jun 2017   Time: 16:50:51   Covariance Type: nonrobust

                               coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                  2.8378     0.111   25.517   0.000    2.619    3.056
  C(POLORIENT2)[T.Moderate]  0.0537     0.173    0.310   0.757   -0.286    0.394

  Omnibus: 440.204   Prob(Omnibus): 0.000   Durbin-Watson: 1.859
  Jarque-Bera (JB): 68.978   Prob(JB): 1.05e-15   Skew: 0.410   Kurtosis: 1.847   Cond. No. 2.46

> 55 y/o
              NEWSFREWEEK
POLORIENT2
Non-moderate     4.573770
Moderate         4.019108

OLS Regression Results (> 55 y/o)
  Dep. Variable: NEWSFREWEEK        No. Observations: 863
  Model: OLS (Least Squares)        Df Residuals: 861   Df Model: 1
  R-squared: 0.011                  Adj. R-squared: 0.010
  F-statistic: 9.499                Prob (F-statistic): 0.00212
  Log-Likelihood: -2029.2           AIC: 4062.   BIC: 4072.
  Date: Mon, 19 Jun 2017   Time: 16:50:51   Covariance Type: nonrobust

                               coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                  4.5738     0.109   42.134   0.000    4.361    4.787
  C(POLORIENT2)[T.Moderate] -0.5547     0.180   -3.082   0.002   -0.908   -0.201

  Omnibus: 570.890   Prob(Omnibus): 0.000   Durbin-Watson: 1.766
  Jarque-Bera (JB): 82.817   Prob(JB): 1.04e-18   Skew: -0.476   Kurtosis: 1.818   Cond. No. 2.42

Warnings (all models): [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Summary
Bivariate analysis:
Across the full sample, non-moderates (conservatives and liberals) view more news on average than moderates (3.38 vs. 3.03 days per week). The p-value is < .05 (.002), which means the difference is statistically significant.
Subgroup 1: 18 - 35 y/o
The mean values for non-moderates and moderates are 2.14 and 1.79 days per week, respectively.
The P value is > .05 (~.06) for this group. Therefore, the difference in mean values is not statistically significant. We cannot reject the null hypothesis.
Subgroup 2: 36 - 55 y/o
The mean values for non-moderates and moderates are approximately 2.84 and 2.89 days per week, respectively.
The P value is > .05 (~.76) for this group. Therefore, the difference in mean values is not statistically significant. We cannot reject the null hypothesis.
Subgroup 3: > 55 y/o
The mean values for non-moderates and moderates are approximately 4.6 and 4.01, respectively.
The P value is < .05 (~.002) for this group. Therefore, the difference in mean values is statistically significant. We can reject the null hypothesis.
Multivariate analysis:
We observe the following trends:
The bivariate analysis indicates that there is more news-viewing among non-moderates (liberals and conservatives) versus moderates.
However, within the 18 - 35 and 36 - 55 subgroups, the difference in news-viewing between moderates and non-moderates is not statistically significant.
In the > 55 year-old subgroup, the difference between moderate and non-moderate news-viewing is statistically significant, and matches the overall pattern.
In conclusion, we can confirm that the relationship between political orientation and news-viewing holds only for individuals older than 55. More observations of individuals 55 years old and younger would be needed to determine whether the relationship holds for those subgroups as well.
Age does have a moderating effect on the relationship.
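Side note (my addition, not part of the course assignment): a more compact way to test for moderation is to fit a single model with an interaction term instead of separate subgroup regressions; a significant POLORIENT2 x AGEGRP interaction coefficient indicates that the effect of political orientation differs by age group. A minimal sketch, assuming the same datamodule as above:

import datamodule
import statsmodels.formula.api as smf

data = datamodule.data
# '*' expands to both main effects plus their interaction terms
model_int = smf.ols(formula='NEWSFREWEEK ~ C(POLORIENT2) * C(AGEGRP)', data=data).fit()
print(model_int.summary())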
0 notes
galactissolutions · 8 years ago
Text
20464: Developing Microsoft SQL Server Databases (SQL Server 2014 Update) thailand
20464: Developing Microsoft SQL Server Databases (SQL Server 2014 Update)

Course Description
Duration: 5.00 days (40 hours)
This 5-day instructor-led course introduces SQL Server 2014 and describes logical table design, indexing, and query plans. It also focuses on the creation of database objects including views and stored procedures, along with parameters and functions. Other common aspects of procedure coding, such as indexes, concurrency, error handling, and triggers, are also covered. This course also helps you prepare for Exam 70-464.
Note: This course is designed for customers who are interested in learning SQL Server 2012 or SQL Server 2014. It covers the new features in SQL Server 2014, but also the important capabilities across the SQL Server data platform.

Intended Audience For This Course
» The primary audience for this course is IT professionals who want to become skilled in SQL Server 2014 product features and technologies for implementing a database.

Course Objectives
» Introduce the entire SQL Server platform and its major tools, covering editions, versions, basics of network listeners, and concepts of services and service accounts.
» Determine appropriate data types to be used when designing tables, convert data between data types, and create alias data types.
» Be aware of good design practices regarding SQL Server tables and be able to create tables using T-SQL. (Note: partitioned tables are not covered.)
» Implement PRIMARY KEY, FOREIGN KEY, DEFAULT, CHECK and UNIQUE constraints, and investigate cascading FOREIGN KEY constraints.
» Determine appropriate single-column and composite index strategies.
» Create tables as heaps and tables with clustered indexes; also consider the design of a table and suggest an appropriate structure.
» Read and interpret details of common elements from execution plans.
» Design effective non-clustered indexes.
» Design and implement views.
» Design and implement stored procedures.
» Work with table types and table-valued parameters, and use the MERGE statement to create stored procedures that update data warehouses.
» Design and implement functions, both scalar and table-valued. (Also describe where they can lead to performance issues.)
» Perform basic investigation of a deadlock situation and learn how transaction isolation levels affect application concurrency.
» Use both traditional T-SQL error handling code and structured exception handling.
» Design and implement DML triggers.
» Learn appropriate uses for SQL CLR integration and implement an existing .NET assembly within SQL Server.
» Store XML data and schemas in SQL Server.
» Perform basic queries on XML data in SQL Server.

Course Outline
Module 1: Introduction to Database Development – Introduction to the SQL Server Platform; SQL Server Database Development Tasks; Lab: Introduction to Database Development
Module 2: Designing and Implementing Tables – Designing Tables; Data Types; Working with Schemas; Creating and Altering Tables; Partitioning Data; Compressing Data; Lab: Designing and Implementing Tables
Module 3: Ensuring Data Integrity through Constraints – Enforcing Data Integrity; Implementing Domain Integrity; Implementing Entity and Referential Integrity; Lab: Ensuring Data Integrity through Constraints
Module 4: Introduction to Indexing – Core Indexing Concepts; Data Types and Indexes; Single Column and Composite Indexes; Lab: Implementing Indexes
Module 5: Designing Optimized Index Strategies – Covering Indexes; Managing Indexes; Working with Execution Plans; Using the DTE; Lab: Designing Optimized Index Strategies
Module 6: Columnstore Indexes – Introduction to Columnstore Indexes; Creating Columnstore Indexes; Working with Columnstore Indexes; Lab: Using Columnstore Indexes
Module 7: Designing and Implementing Views – Introduction to Views; Creating and Managing Views; Performance Considerations for Views; Lab: Designing and Implementing Views
Module 8: Designing and Implementing Stored Procedures – Introduction to Stored Procedures; Working with Stored Procedures; Implementing Parameterized Stored Procedures; Controlling Execution Context; Lab: Designing and Implementing Stored Procedures
Module 9: Designing and Implementing User-Defined Functions – Overview of Functions; Designing and Implementing Scalar Functions; Designing and Implementing Table-Valued Functions; Implementation Considerations for Functions; Alternatives to Functions; Lab: Designing and Implementing User-Defined Functions
Module 10: Responding to Data Manipulation via Triggers – Designing DML Triggers; Implementing DML Triggers; Advanced Trigger Concepts; Lab: Responding to Data Manipulation via Triggers
Module 11: Using In-Memory Tables – In-Memory Tables; Native Stored Procedures; Lab: In-Memory OLTP
Module 12: Implementing Managed Code in SQL Server 2014 – Introduction to SQL CLR Integration; Importing and Configuring Assemblies; Implementing SQL CLR Integration; Lab: Implementing Managed Code in SQL Server 2014
Module 13: Storing and Querying XML Data in SQL Server – Introduction to XML and XML Schemas; Storing XML Data and Schemas in SQL Server; Implementing the XML Data Type; Using the T-SQL FOR XML Statement; Getting Started with XQuery; Lab: Storing and Querying XML Data in SQL Server 2014
Module 14: Working with SQL Server 2014 Spatial Data – Introduction to Spatial Data; Working with SQL Server Spatial Data Types; Using Spatial Data in Applications; Lab: Working with SQL Server Spatial Data
Module 15: Incorporating Data Files into Databases – Querying Data with Stored Procedures; Implementing FILESTREAM and File Tables; Searching Data Files; Lab: Implementing a Solution for Storing Data Files
0 notes
sascourserablog · 8 years ago
Text
Week 3: Pearson’s R correlation
CODE
Data Management
import pandas as pd
import numpy as np

# initialize 'data' for relevant variables
full_data = pd.read_csv('../ool_pds.csv', low_memory=False)
full_data.columns = map(str.upper, full_data.columns)
data = full_data[['PPAGE', 'W1_A11', 'W1_C2', 'W1_F1']].copy()

# "How many days in the past week did you watch national news programs on television or on the Internet?"
data['W1_A11'] = pd.to_numeric(data['W1_A11'], errors='coerce')
# "We hear a lot of talk these days about liberals and conservatives. Where would you place YOURSELF on this 7 point scale?"
data['W1_C2'] = pd.to_numeric(data['W1_C2'], errors='coerce')
# "When you think about your future, are you generally optimistic, pessimistic, or neither optimistic nor pessimistic?"
data['W1_F1'] = pd.to_numeric(data['W1_F1'], errors='coerce')

# convert NA to numpy.nan
data['W1_A11'] = data['W1_A11'].replace(-1, np.nan)
data['W1_C2'] = data['W1_C2'].replace(-1, np.nan)
data['W1_F1'] = data['W1_F1'].replace(-1, np.nan)

# recode A11 into more logical categories (days last week watched news)
recode_news = {1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}
data['NEWSFREWEEK'] = data['W1_A11'].map(recode_news)

# consolidate liberal/conservative categories (1: liberal, 2: moderate, 3: conservative)
recode_political = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 3, 7: 3}
data['POLORIENT'] = data['W1_C2'].map(recode_political)

# recode to moderates and non-moderates only
data['POLORIENT2'] = data['POLORIENT'].map({1: 0, 2: 1, 3: 0})

# define as categorical variables; rename categories
data['POLORIENT'] = data['POLORIENT'].astype('category')
data['POLORIENT'] = data['POLORIENT'].cat.rename_categories(["Liberal", "Moderate", "Conservative"])

# reassign W1_F1
data['OPTIMISM'] = data['W1_F1']
data['OPTIMISM'] = data['OPTIMISM'].astype('category')
data['OPTIMISM'] = data['OPTIMISM'].cat.rename_categories(["Optimistic", "Pessimistic", "Neither"])

# cut data into 18-35, 36-55, 56-120; make categorical
data['AGEGRP'] = pd.cut(data.PPAGE, [17, 35, 55, 120])
data['AGEGRP'] = data['AGEGRP'].astype('category')
data['AGEGRP'] = data['AGEGRP'].cat.rename_categories(["18 - 35 y/o", "36 - 55 y/o", "> 55 y/o"])
Analysis
import matplotlib
matplotlib.use('Agg')
import datamodule
import pandas as pd
import seaborn as sb
import scipy
import matplotlib.pyplot as plt

# load main dataframe and drop missing values
data_clean = datamodule.data.dropna()

sb.regplot(x='PPAGE', y='NEWSFREWEEK', data=data_clean, scatter_kws={'s': 2})
plt.xlabel('Age range')
plt.ylabel('News viewing last week')
plt.title('News viewing by age group')
plt.savefig('scat_PPAGExNEWSFREWEEK')

print ('association between age and frequency of news-viewing last week')
print (scipy.stats.pearsonr(data_clean['PPAGE'], data_clean['NEWSFREWEEK']))
OUTPUT
association between age and frequency of news-viewing last week
(0.37938999520262762, 7.0282748984812713e-76)
Interpretation
A scatter plot of the data shows a clear positive relationship.
The Pearson's r value is ~.379. The relationship is statistically significant at p < .05 (p = 7.02e-76). The r^2 value is approximately .144, meaning that 14.4% of the variability in news-viewing can be predicted with knowledge of the individual's age.
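As a quick numerical check (my addition, reusing the same datamodule as the script above), r-squared is simply the square of Pearson's r:

import scipy.stats
import datamodule

data_clean = datamodule.data.dropna()
r, p = scipy.stats.pearsonr(data_clean['PPAGE'], data_clean['NEWSFREWEEK'])
print(r ** 2)  # ~0.144: share of variance in news-viewing linearly associated with age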
0 notes
sascourserablog · 8 years ago
Text
Week 2: Chi Square Test
CODE
Data Management
import pandas as pd
import numpy as np

# initialize 'data' for relevant variables
full_data = pd.read_csv('../ool_pds.csv', low_memory=False)
full_data.columns = map(str.upper, full_data.columns)
data = full_data[['PPAGE', 'W1_A11', 'W1_C2', 'W1_F1']].copy()

# "How many days in the past week did you watch national news programs on television or on the Internet?"
data['W1_A11'] = pd.to_numeric(data['W1_A11'], errors='coerce')
# "We hear a lot of talk these days about liberals and conservatives. Where would you place YOURSELF on this 7 point scale?"
data['W1_C2'] = pd.to_numeric(data['W1_C2'], errors='coerce')
# "When you think about your future, are you generally optimistic, pessimistic, or neither optimistic nor pessimistic?"
data['W1_F1'] = pd.to_numeric(data['W1_F1'], errors='coerce')

# convert NA to numpy.nan
data['W1_A11'] = data['W1_A11'].replace(-1, np.nan)
data['W1_C2'] = data['W1_C2'].replace(-1, np.nan)
data['W1_F1'] = data['W1_F1'].replace(-1, np.nan)

# recode A11 into more logical categories (days last week watched news)
recode_news = {1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}
data['NEWSFREWEEK'] = data['W1_A11'].map(recode_news)

# consolidate liberal/conservative categories (1: liberal, 2: moderate, 3: conservative)
recode_political = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 3, 7: 3}
data['POLORIENT'] = data['W1_C2'].map(recode_political)

# recode to moderates and non-moderates only
data['POLORIENT2'] = data['POLORIENT'].map({1: 0, 2: 1, 3: 0})

# define as categorical variables; rename categories
data['POLORIENT'] = data['POLORIENT'].astype('category')
data['POLORIENT'] = data['POLORIENT'].cat.rename_categories(["Liberal", "Moderate", "Conservative"])

# reassign W1_F1
data['OPTIMISM'] = data['W1_F1']
data['OPTIMISM'] = data['OPTIMISM'].astype('category')
data['OPTIMISM'] = data['OPTIMISM'].cat.rename_categories(["Optimistic", "Pessimistic", "Neither"])

# cut data into 18-35, 36-55, 56-120; make categorical
data['AGEGRP'] = pd.cut(data.PPAGE, [17, 35, 55, 120])
data['AGEGRP'] = data['AGEGRP'].astype('category')
data['AGEGRP'] = data['AGEGRP'].cat.rename_categories(["18 - 35 y/o", "36 - 55 y/o", "> 55 y/o"])
Analysis
import matplotlib
matplotlib.use('Agg')
import datamodule
import pandas as pd
import scipy.stats
import seaborn as sb
import matplotlib.pyplot as plt

# load main dataframe as dataframe sub1
sub1 = datamodule.data

# Political orientation --> Optimism?
# crosstab(response, explanatory)
ct1 = pd.crosstab(sub1['OPTIMISM'], sub1['POLORIENT'])
print (ct1)

# column percentages
colsum = ct1.sum(axis=0)
colpct = ct1 / colsum
print (colpct)

print('chi-square value, p value, expected counts')
cs1 = scipy.stats.chi2_contingency(ct1)
print (cs1)

# create plot of column percentages
sub1['OPTIMISM_num'] = sub1['W1_F1'].map({1: 5, 2: 3, 3: 0})
sb.factorplot(x="POLORIENT", y="OPTIMISM_num", data=sub1, kind='bar', ci=None)
plt.xlabel('Political Orientation')
plt.ylabel('Optimism')
plt.savefig('POLORIENTxOPTIMISM')

# adjusted p value: 3 comparisons; therefore, .05/3 = .016666
# create comparisons:
recode = {'Liberal': 'Liberal', 'Moderate': 'Moderate'}
sub1['COMPlibvmod'] = sub1['POLORIENT'].map(recode)
recode = {'Liberal': 'Liberal', 'Conservative': 'Conservative'}
sub1['COMPlibvcons'] = sub1['POLORIENT'].map(recode)
recode = {'Conservative': 'Conservative', 'Moderate': 'Moderate'}
sub1['COMPconsvmod'] = sub1['POLORIENT'].map(recode)

ct_libvmod = pd.crosstab(sub1['OPTIMISM'], sub1['COMPlibvmod'])
print (ct_libvmod)
cs2 = scipy.stats.chi2_contingency(ct_libvmod)
print (cs2)

ct_libvcons = pd.crosstab(sub1['OPTIMISM'], sub1['COMPlibvcons'])
print (ct_libvcons)
cs3 = scipy.stats.chi2_contingency(ct_libvcons)
print (cs3)

ct_consvmod = pd.crosstab(sub1['OPTIMISM'], sub1['COMPconsvmod'])
print (ct_consvmod)
cs4 = scipy.stats.chi2_contingency(ct_consvmod)
print (cs4)
OUTPUT
POLORIENT    Liberal  Moderate  Conservative
OPTIMISM
Optimistic       402       439           361
Pessimistic      209       350           220
Neither           50        69            95

POLORIENT     Liberal  Moderate  Conservative
OPTIMISM
Optimistic   0.608169  0.511655      0.534024
Pessimistic  0.316188  0.407925      0.325444
Neither      0.075643  0.080420      0.140533

chi-square value, p value, expected counts
(36.568067634483867, 2.2107665794204401e-07, 4, array([
       [ 361.9690205 ,  469.84783599,  370.18314351],
       [ 234.58724374,  304.50205011,  239.91070615],
       [  64.44373576,   83.6501139 ,   65.90615034]]))

COMPlibvmod  Liberal  Moderate
OPTIMISM
Optimistic       402       439
Pessimistic      209       350
Neither           50        69
(14.928784104541226, 0.00057313340361176497, 2, array([
       [ 365.96510862,  475.03489138],
       [ 243.25148124,  315.74851876],
       [  51.78341014,   67.21658986]]))

COMPlibvcons  Conservative  Liberal
OPTIMISM
Optimistic             361      402
Pessimistic            220      209
Neither                 95       50
(16.28447650675713, 0.00029098517026542107, 2, array([
       [ 385.78010471,  377.21989529],
       [ 216.90650711,  212.09349289],
       [  73.31338818,   71.68661182]]))

COMPconsvmod  Conservative  Moderate
OPTIMISM
Optimistic             361       439
Pessimistic            220       350
Neither                 95        69
(20.06530119066143, 4.3941533342804916e-05, 2, array([
       [ 352.54237288,  447.45762712],
       [ 251.18644068,  318.81355932],
       [  72.27118644,   91.72881356]]))
Interpretation
For this assignment, I used a chi-square test to see whether political orientation and level of optimism are independent. The null hypothesis is: self-reported political orientation and level of optimism are independent.
First, I ran counts and column percentages in order to properly summarize the two variables. Next, I ran an overall chi-square test of independence. This rejected the null hypothesis, with a p-value < .001. Therefore, there is a dependent relationship between these two variables.
I was also able to explore this potential relationship visually, by mapping the optimism responses onto a rough numeric 'level of optimism' scale. Larger bars indicate more optimism:
Next, I ran post-hoc tests in order to examine which pairs demonstrated a significantly dependent relationship. The significance threshold was adjusted to .0167 (.05 / 3 comparisons, a Bonferroni correction). The post-hoc tests rejected the null hypothesis for all comparisons (p < .0167).
Therefore, I can conclude using the Chi-square test that there is likely a dependent relationship between political orientation and self-proclaimed outlook about the future.
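The pairwise comparisons above are written out by hand; the same post-hoc procedure can be expressed as a loop over category pairs with a Bonferroni-adjusted threshold. A minimal generalization sketch (my addition, not part of the original assignment), assuming the same sub1 dataframe as the analysis code above:

import itertools
import pandas as pd
import scipy.stats

def pairwise_chi2(df, row_var, col_var, alpha=0.05):
    # every pair of explanatory-variable categories
    cats = list(df[col_var].dropna().unique())
    pairs = list(itertools.combinations(cats, 2))
    adj_alpha = alpha / len(pairs)  # Bonferroni adjustment
    for a, b in pairs:
        sub = df[df[col_var].isin([a, b])]
        # cast to str so unused category levels are dropped from the crosstab
        ct = pd.crosstab(sub[row_var].astype(str), sub[col_var].astype(str))
        chi2, p, dof, expected = scipy.stats.chi2_contingency(ct)
        print(a, 'vs', b, '- chi2:', round(chi2, 2), 'p:', p, 'reject:', p < adj_alpha)

pairwise_chi2(sub1, 'OPTIMISM', 'POLORIENT')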
0 notes
sascourserablog · 8 years ago
Text
Week 1 Assignment: Hypothesis Testing and ANOVA
Note: for the sake of brevity, I 'imported' the data management tasks as 'datamodule.py'. See previous posts for the data management code.
Code
import datamodule
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi

# load main dataframe as dataframe sub1
sub1 = datamodule.data

# dictionary of variable descriptions
var_description = {
    'AGEGRP': "Age (in years) [Categorical]",
    'PPAGE': "Age (in years) [Quantitative]",
    'NEWSFREWEEK': "Number of times has watched the news on television or Internet (last week)",
    'POLORIENT': "Political orientation (self-reported)"
}

# Hypothesis 1: C -> Q; POLORIENT -> NEWSFREWEEK
# template: model1 = smf.ols(formula='RESPONSEVAR ~ C(EXPLANATORYVAR)', data=data)
model1 = smf.ols(formula='NEWSFREWEEK ~ C(POLORIENT)', data=sub1).fit()

# show 'fit statistics' for model1 and print summary
print(model1.summary())

# create dataframe sub_model1 with only the variables in model1
sub_model1 = sub1[['NEWSFREWEEK', 'POLORIENT']].dropna()
m_model1 = sub_model1.groupby('POLORIENT').mean()
print(m_model1)
sd_model1 = sub_model1.groupby('POLORIENT').std()
print(sd_model1)

# Hypothesis 1: post-hoc test
mc1 = multi.MultiComparison(sub_model1['NEWSFREWEEK'], sub_model1['POLORIENT'])
res1 = mc1.tukeyhsd()
print(res1.summary())

# Hypothesis 2: C -> Q; AGEGRP -> NEWSFREWEEK
sub_model2 = sub1[['NEWSFREWEEK', 'AGEGRP']].dropna()
model2 = smf.ols(formula='NEWSFREWEEK ~ C(AGEGRP)', data=sub_model2).fit()
print(model2.summary())
m_model2 = sub_model2.groupby('AGEGRP').mean()
print(m_model2)
sd_model2 = sub_model2.groupby('AGEGRP').std()
print(sd_model2)

# Hypothesis 2: post-hoc test
mc2 = multi.MultiComparison(sub_model2['NEWSFREWEEK'], sub_model2['AGEGRP'])
res2 = mc2.tukeyhsd()
print(res2.summary())
Hypothesis 1: POLORIENT -> NEWSFREWEEK ?
Output
OLS Regression Results
  Dep. Variable: NEWSFREWEEK        No. Observations: 2227
  Model: OLS (Least Squares)        Df Residuals: 2224   Df Model: 2
  R-squared: 0.004                  Adj. R-squared: 0.003
  F-statistic: 4.782                Prob (F-statistic): 0.00847
  Log-Likelihood: -5285.5           AIC: 1.058e+04   BIC: 1.059e+04
  Date: Tue, 06 Jun 2017   Time: 04:21:30   Covariance Type: nonrobust

                                   coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                      3.3715     0.100   33.654   0.000    3.175    3.568
  C(POLORIENT)[T.Moderate]      -0.3439     0.133   -2.578   0.010   -0.606   -0.082
  C(POLORIENT)[T.Conservative]   0.0101     0.141    0.072   0.943   -0.267    0.287

  Omnibus: 14.591   Prob(Omnibus): 0.001   Durbin-Watson: 1.990
  Jarque-Bera (JB): 197.676   Prob(JB): 1.19e-43   Skew: 0.200   Kurtosis: 1.596   Cond. No. 3.91

  Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Group means (NEWSFREWEEK):
  Liberal       3.371471
  Moderate      3.027586
  Conservative  3.381579

Group standard deviations (NEWSFREWEEK):
  Liberal       2.652810
  Moderate      2.554208
  Conservative  2.601781

Multiple Comparison of Means - Tukey HSD, FWER=0.05
  group1        group2    meandiff   lower     upper    reject
  Conservative  Liberal    -0.0101   -0.341    0.3208   False
  Conservative  Moderate   -0.354    -0.6655  -0.0425   True
  Liberal       Moderate   -0.3439   -0.6568  -0.031    True
Summary
The F-statistic was 4.782 with a p-value of .008, which is within the bounds of statistical significance (p < .05). A post-hoc test was necessary because the categorical variable has more than two categories. The post-hoc test found that the difference between conservatives and liberals was not significant, but the difference between moderates and each of the other two groups was significant.
Therefore, we can reject the following null hypotheses:
Average amount of news-viewing for liberals = Average amount of news-viewing for moderates
Average amount of news-viewing for conservatives = Average amount of news-viewing for moderates
We FAIL to reject the null hypothesis:
Average amount of news-viewing for liberals = Average amount of news-viewing for conservatives.
Hypothesis 2: AGEGRP -> NEWSFREWEEK ?
Output
OLS Regression Results
  Dep. Variable: NEWSFREWEEK        No. Observations: 2283
  Model: OLS (Least Squares)        Df Residuals: 2280   Df Model: 2
  R-squared: 0.135                  Adj. R-squared: 0.134
  F-statistic: 177.4                Prob (F-statistic): 2.49e-72
  Log-Likelihood: -5261.3           AIC: 1.053e+04   BIC: 1.055e+04
  Date: Tue, 06 Jun 2017   Time: 04:21:30   Covariance Type: nonrobust

                               coef   std err        t   P>|t|   [0.025   0.975]
  Intercept                  1.9909     0.103   19.246   0.000    1.788    2.194
  C(AGEGRP)[T.36 - 55 y/o]   0.8357     0.133    6.292   0.000    0.575    1.096
  C(AGEGRP)[T.> 55 y/o]      2.3571     0.132   17.894   0.000    2.099    2.615

  Omnibus: 476.425   Prob(Omnibus): 0.000   Durbin-Watson: 1.976
  Jarque-Bera (JB): 97.451   Prob(JB): 6.90e-22   Skew: 0.115   Kurtosis: 2.014   Cond. No. 4.33

  Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Group means (NEWSFREWEEK):
  18 - 35 y/o  1.990909
  36 - 55 y/o  2.826651
  > 55 y/o     4.348023

Group standard deviations (NEWSFREWEEK):
  18 - 35 y/o  2.124947
  36 - 55 y/o  2.458034
  > 55 y/o     2.566493

Multiple Comparison of Means - Tukey HSD, FWER=0.05
  group1       group2       meandiff   lower    upper   reject
  18 - 35 y/o  36 - 55 y/o    0.8357   0.5242   1.1472  True
  18 - 35 y/o  > 55 y/o       2.3571   2.0482   2.666   True
  36 - 55 y/o  > 55 y/o       1.5214   1.248    1.7948  True
Summary
The ANOVA shows an extremely small p-value (2.49e-72). A post-hoc analysis was necessary because the explanatory variable has more than two categories. According to the post-hoc analysis, all group means are significantly different from one another.
We reject the following null hypotheses:
Average amount of news-viewing for 18-35 y/o = Average amount of news-viewing for 36-55 y/o
Average amount of news-viewing for 18-35 y/o = Average amount of news-viewing for > 55 y/o
Average amount of news-viewing for 36-55 y/o = Average amount of news-viewing for > 55 y/o
0 notes