Results for *

Showing results 1 to 6 of 6.

  1. Do pre-registration and pre-analysis plans reduce p-hacking and publication bias?
    Published: August 2022
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    Randomized controlled trials (RCTs) are increasingly prominent in economics, with pre-registration and pre-analysis plans (PAPs) promoted as important in ensuring the credibility of findings. We investigate whether these tools reduce the extent of p-hacking and publication bias by collecting and studying the universe of test statistics, 15,992 in total, from RCTs published in 15 leading economics journals from 2018 through 2021. In our primary analysis, we find no meaningful difference in the distribution of test statistics from pre-registered studies, compared to their non-pre-registered counterparts. However, pre-registered studies that have a complete PAP are significantly less p-hacked. These results point to the importance of PAPs, rather than pre-registration in itself, in ensuring credibility.

     

    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Further identifier:
    hdl: 10419/265697
    Series: Discussion paper series / IZA ; no. 15476
    Subjects: pre-analysis plan; pre-registration; p-hacking; publication bias; research credibility
    Extent: 1 online resource (circa 44 pages), illustrations
  2. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about publication bias and p-hacking in online experiments
    Published: August 2022
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    Amazon Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Further identifier:
    hdl: 10419/265699
    Series: Discussion paper series / IZA ; no. 15478
    Subjects: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 57 pages), illustrations
  3. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about p-hacking and publication bias in online experiments
    Published: 2022
    Publisher: Global Labor Organization (GLO), Essen

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 565
    no interlibrary loan

    Amazon's Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Further identifier:
    hdl: 10419/263216
    Series: GLO discussion paper ; no. 1157
    Subjects: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 56 pages), illustrations
  4. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about publication bias and p-hacking in online experiments
    Published: [2022]
    Publisher: LCERPA, Laurier Centre for Economic Research & Policy Analysis, [Waterloo, ON]

    Access:
    Publisher (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    VS 560
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: LCERPA working paper ; no. 2022, 4 (August 2022)
    Subjects: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 56 pages), illustrations
  5. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about p-hacking and publication bias in online experiments
    Published: November 2022
    Publisher: Institute for Replication, Essen, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 831
    no interlibrary loan

    Amazon's Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Further identifier:
    hdl: 10419/266266
    Series: I4R discussion paper series / Institute for Replication ; no. 8
    Subjects: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 58 pages), illustrations
  6. Methods matter
    p-hacking and causal inference in economics
    Published: August 2018
    Publisher: Department of Economics, Faculty of Social Sciences, University of Ottawa, Ottawa

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Content notes:
    Full text (free of charge)
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: Working paper / Department of Economics, Faculty of Social Sciences, University of Ottawa ; 1809E
    Subjects: Research methods; causal inference; p-curves; p-hacking; publication bias
    Extent: 1 online resource (circa 27 pages), illustrations