Browsing by Author "Filevich, Elisa"
Now showing 1 - 3 of 3
Item: Consensus goals for the field of visual metacognition (2021)
Authors: Rahnev, Dobromir; Balsdon, Tarryn; Charles, Lucie; De Gardelle, Vincent; Denison, Rachel; Desender, Kobe; Faivre, Nathan; Filevich, Elisa; Fleming, Stephen; Jehee, Janneke; Lau, Hakwan; Lee, Alan; Locke, Shannon; Mamassian, Pascal; Odegaard, Brian; Peters, Megan; Reyes, Gabriel; Rouault, Marion; Sackur, Jérôme; Samaha, Jason; Sergent, Claire; Sherman, Maxine Tamara; Siedlecka, Marta; Soto, David; Vlassova, Alexandra; Zylberberg, Ariel
Abstract: Despite the tangible progress in psychological and cognitive sciences over the last several years, these disciplines still trail other, more mature sciences in identifying the most important questions that need to be solved. Reaching such consensus could lead to greater synergy across different laboratories, faster progress, and increased focus on solving important problems rather than pursuing isolated, niche efforts. Here, 26 researchers from the field of visual metacognition reached consensus on four long-term and two medium-term common goals. We describe the process that we followed, the goals themselves, and our plans for accomplishing these goals. If this effort proves successful within the next few years, such consensus building around common goals could be adopted more widely in psychological science.

Item: Metacognitive Improvement: Disentangling Adaptive Training From Experimental Confounds (2021)
Authors: Rouy, Martin; de Gardelle, Vincent; Sackur, Jérôme; Vergnaud, Jean Christophe; Filevich, Elisa; Faivre, Nathan; Reyes, Gabriel
Abstract: Metacognition is defined as the capacity to monitor and control one's own cognitive processes. Recently, Carpenter and colleagues (2019) reported that metacognitive performance can be improved through adaptive training: healthy participants performed a perceptual discrimination task and subsequently indicated confidence in their response. Metacognitive performance, defined as how much information these confidence judgments contain about the accuracy of perceptual decisions, was found to increase in a group of participants who received monetary reward based on their confidence judgments over hundreds of trials and multiple sessions. By contrast, in a control group where only perceptual performance was incentivized, metacognitive performance remained constant across experimental sessions. We identified two possible confounds that may have led to an artificial increase in metacognitive performance, namely the absence of reward in the initial session and an inconsistency between the reward scheme and the instructions about the confidence scale. We thus conducted a preregistered conceptual replication in which all sessions were rewarded and the instructions were consistent with the reward scheme. Critically, once these two confounds were corrected, we found moderate evidence for an absence of metacognitive training. Our data thus suggest that previous claims about metacognitive training are premature and call for more research on how to train individuals to monitor their own performance.
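The abstract above quantifies metacognitive performance as how well trial-by-trial confidence tracks the accuracy of decisions. As a minimal illustrative sketch only, the type-2 AUROC below is one common nonparametric measure of exactly that quantity; whether it matches the specific metric used in these studies is an assumption, not a claim about their analysis pipeline.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC curve: the probability that a randomly
    chosen correct trial received higher confidence than a randomly
    chosen incorrect trial. 0.5 means confidence carries no information
    about accuracy; 1.0 means it perfectly separates hits from errors."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    conf_correct = conf[correct]
    conf_error = conf[~correct]
    # Compare every (correct, error) pair; ties count as 0.5,
    # which is the standard Mann-Whitney convention.
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

# Toy example: 10 trials with accuracy (1/0) and 1-4 confidence ratings.
acc = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
conf = [4, 3, 2, 4, 1, 3, 2, 3, 4, 1]
print(type2_auroc(acc, conf))
```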
Item: The Confidence Database (2020)
Authors: Rahnev, Dobromir; Desender, Kobe; Lee, Alan L. F.; Adler, William T.; Aguilar-Lleyda, David; Akdoğan, Başak; Arbuzova, Polina; Atlas, Lauren Y.; Balcı, Fuat; Bègue, Indrit; Birney, Damian P.; Brady, Timothy F.; Calder-Travis, Joshua; Chetverikov, Andrey; Clark, Torin K.; Davranche, Karen; Denison, Rachel N.; Double, Kit S.; Bang, Won Ji; Duyan, Yalçın A.; Faivre, Nathan; Fallow, Kaitlyn; Filevich, Elisa; Gajdos, Thibault; Gallagher, Regan M.; Gardelle, Vincent de; Gherman, Sabina; Haddara, Nadia; Hainguerlot, Marine; Hsu, Tzu-Yu; Hu, Xiao; Iturrate, Iñaki; Jaquiery, Matt; Kantner, Justin; Koculak, Marcin; Konishi, Mahiko; Koß, Christina; Kvam, Peter D.; Kwok, Sze Chai; Lebreton, Maël; Reyes, Gabriel
Abstract: Understanding how people rate their confidence is critical for characterizing a wide range of perceptual, memory, motor, and cognitive processes. To enable the continued exploration of these processes, we created a large database of confidence studies spanning a broad set of paradigms, participant populations, and fields of study. The data from each study are structured in a common, easy-to-use format that can be imported and analyzed in multiple software packages. Each dataset is further accompanied by an explanation of the nature of the collected data. At the time of publication, the Confidence Database (available at osf.io/s46pr) contained 145 datasets with data from over 8,700 participants and almost 4 million trials. The database will remain open for new submissions indefinitely and is expected to continue to grow. We demonstrate the usefulness of this large collection of datasets in four analyses that provide precise estimates of several foundational confidence-related effects.
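The abstract notes that every dataset in the database shares a common trial-level format that can be imported into standard analysis software. A minimal sketch of such an import, assuming a local copy of one dataset downloaded from osf.io/s46pr; the file name is a hypothetical placeholder, and the column names are assumed from the common format described in the paper.

```python
import pandas as pd

# Hypothetical local copy of one dataset from osf.io/s46pr;
# "data_ExampleStudy.csv" is a placeholder, not a real entry.
df = pd.read_csv("data_ExampleStudy.csv")

# Core trial-level columns assumed from the database's shared format:
# subject index, stimulus identity, response, and confidence rating.
mean_conf = df.groupby("Subj_idx")["Confidence"].mean()
accuracy = (df["Stimulus"] == df["Response"]).mean()

print(mean_conf.head())
print(f"Overall accuracy: {accuracy:.2f}")
```

Because the datasets are deliberately structured alike, the same few lines should generalize across studies with only the file name changing, provided each dataset supplies the shared columns.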