Results for *

Showing results 1 to 8 of 8.

  1. P-hacking, data type and data-sharing policy
    Published: September 2022
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    In this paper, we examine the relationship between p-hacking and data-sharing policies for published articles. We collect 38,876 test statistics from 1,106 articles published in leading economics journals between 2002 and 2020. While a data-sharing policy increases the provision of research data to the community, we find a well-estimated null effect: requiring authors to share their data at the time of publication does not alter the presence of p-hacking. Similarly, articles that use hard-to-access administrative data or third-party surveys are no different in their extent of p-hacking from those that use easier-to-access (e.g., own-collected) data. Voluntary provision of data by authors on their homepages offers no evidence of reduced p-hacking.
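    The kind of test the abstract describes, checking whether a collection of test statistics bunches just above a significance threshold, can be illustrated with a caliper test, a standard diagnostic in the p-hacking literature. This is a minimal sketch, not necessarily the exact procedure used in the paper:

    ```python
    import math

    def caliper_test(z_stats, threshold=1.96, width=0.20):
        """Caliper test for bunching of test statistics just above a
        significance threshold. Under no p-hacking, statistics falling
        in the narrow window around the threshold should be roughly as
        likely to land just above it as just below it."""
        above = sum(1 for z in z_stats if threshold <= abs(z) < threshold + width)
        below = sum(1 for z in z_stats if threshold - width <= abs(z) < threshold)
        n = above + below
        # Exact two-sided binomial test of H0: P(above) = 0.5
        k = max(above, below)
        p = 2 * sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
        return above, below, min(p, 1.0)
    ```

    A balanced sample around the threshold yields a large p-value, while a sample with many more statistics just above 1.96 than just below it yields a small one, which is the signature of bunching.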

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/265807
    Series: Discussion paper series / IZA ; no. 15586
    Keywords: p-hacking; publication bias; data and code availability; data sharing policy; administrative data; survey data
    Extent: 1 online resource (circa 86 pages), illustrations
  2. Do pre-registration and pre-analysis plans reduce p-hacking and publication bias?
    Published: August 2022
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    Randomized controlled trials (RCTs) are increasingly prominent in economics, with pre-registration and pre-analysis plans (PAPs) promoted as important in ensuring the credibility of findings. We investigate whether these tools reduce the extent of p-hacking and publication bias by collecting and studying the universe of test statistics, 15,992 in total, from RCTs published in 15 leading economics journals from 2018 through 2021. In our primary analysis, we find no meaningful difference in the distribution of test statistics from pre-registered studies, compared to their non-pre-registered counterparts. However, pre-registered studies that have a complete PAP are significantly less p-hacked. These results point to the importance of PAPs, rather than pre-registration in itself, in ensuring credibility.
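    The comparison the abstract reports, whether the distribution of test statistics from pre-registered studies differs from the rest, can be sketched with a two-sample Kolmogorov-Smirnov statistic. This is an illustrative choice of distance measure, not necessarily the test used in the paper:

    ```python
    import bisect

    def ks_statistic(sample_a, sample_b):
        """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
        gap between the empirical CDFs of two samples of |z|-statistics.
        A value near 0 means the two distributions look alike."""
        a = sorted(abs(x) for x in sample_a)
        b = sorted(abs(x) for x in sample_b)
        gap = 0.0
        for v in a + b:
            fa = bisect.bisect_right(a, v) / len(a)
            fb = bisect.bisect_right(b, v) / len(b)
            gap = max(gap, abs(fa - fb))
        return gap
    ```

    Identical samples give a statistic of 0, fully separated samples give 1; a finding of "no meaningful difference" corresponds to a statistic close to 0 relative to its sampling noise.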

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/265697
    Series: Discussion paper series / IZA ; no. 15476
    Keywords: pre-analysis plan; pre-registration; p-hacking; publication bias; research credibility
    Extent: 1 online resource (circa 44 pages), illustrations
  3. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about publication bias and p-hacking in online experiments
    Published: August 2022
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    Amazon Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/265699
    Series: Discussion paper series / IZA ; no. 15478
    Keywords: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 57 pages), illustrations
  4. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about p-hacking and publication bias in online experiments
    Published: 2022
    Publisher: Global Labor Organization (GLO), Essen

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 565
    no interlibrary loan

    Amazon's Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/263216
    Series: GLO discussion paper ; no. 1157
    Keywords: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 56 pages), illustrations
  5. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about publication bias and p-hacking in online experiments
    Published: [2022]
    Publisher: LCERPA, Laurier Centre for Economic Research & Policy Analysis, [Waterloo, ON]

    Access:
    Publisher (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    VS 560
    no interlibrary loan
    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Series: LCERPA working paper ; no. 2022, 4 (August 2022)
    Keywords: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 56 pages), illustrations
  6. We need to talk about Mechanical Turk
    what 22,989 hypothesis tests tell us about p-hacking and publication bias in online experiments
    Published: November 2022
    Publisher: Institute for Replication, Essen, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 831
    no interlibrary loan

    Amazon's Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/266266
    Series: I4R discussion paper series / Institute for Replication ; no. 8
    Keywords: online crowd-sourcing platforms; Amazon Mechanical Turk; p-hacking; publication bias; statistical power; research credibility
    Extent: 1 online resource (circa 58 pages), illustrations
  7. Unpacking p-hacking and publication bias
    Published: August 2023
    Publisher: IZA - Institute of Labor Economics, Bonn, Germany

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 4
    no interlibrary loan

    We use unique data from journal submissions to identify and unpack publication bias and p-hacking. We find that initial submissions display significant bunching, suggesting that the distribution among published statistics cannot be fully attributed to publication bias in peer review. Desk-rejected manuscripts display greater heaping than those sent for review; i.e., marginally significant results are more likely to be desk-rejected. Reviewer recommendations, in contrast, are positively associated with statistical significance. Overall, the peer review process has little effect on the distribution of test statistics. Lastly, we track rejected papers and present evidence that the prevalence of publication bias is perhaps not as prominent as feared.
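    The abstract's comparison of heaping between desk-rejected and reviewed manuscripts can be sketched as a two-proportion z-test on the share of statistics landing just above the 5% threshold. This is an illustrative diagnostic under assumed window bounds, not the paper's exact specification:

    ```python
    import math

    def heaping_share(z_stats, lo=1.96, hi=2.16):
        """Share of test statistics landing just above the 5% threshold."""
        return sum(1 for z in z_stats if lo <= abs(z) < hi) / len(z_stats)

    def compare_heaping(group_a, group_b):
        """One-sided two-proportion z-test of whether group_a (e.g.
        desk-rejected manuscripts) heaps more than group_b (e.g. those
        sent for review)."""
        pa, pb = heaping_share(group_a), heaping_share(group_b)
        na, nb = len(group_a), len(group_b)
        pooled = (pa * na + pb * nb) / (na + nb)
        se = math.sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))
        z = (pa - pb) / se
        # One-sided p-value from the standard normal upper tail
        p = 0.5 * math.erfc(z / math.sqrt(2))
        return z, p
    ```

    A large positive z (small p) indicates that the first group has a significantly larger share of marginally significant results, i.e. greater heaping.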

     

    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Further identifier:
    hdl: 10419/279067
    Series: Discussion paper series / IZA ; no. 16369
    Keywords: scientific publishing; peer review; statistical test; systematic error; bibliometrics; publication bias; p-hacking; selective reporting
    Extent: 1 online resource (circa 89 pages), illustrations
  8. Methods matter
    p-hacking and causal inference in economics
    Published: August 2018
    Publisher: Department of Economics, Faculty of Social Sciences, University of Ottawa, Ottawa

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Notes on content
    Full text (free of charge)
    Source: union catalogues
    Language: English
    Media type: book (monograph)
    Format: online
    Series: Working paper / Department of Economics, Faculty of Social Sciences, University of Ottawa ; 1809E
    Keywords: Research methods; causal inference; p-curves; p-hacking; publication bias
    Extent: 1 online resource (circa 27 pages), illustrations