Methods & MetaScience is a journal club and discussion group organized by the School of Psychology & Neuroscience at the University of Glasgow. Meetings are every Wednesday from 12:00 to 13:00 UK time on Zoom. Coding Club meets every Wednesday from 16:05 to 17:00.
In addition to talks, we also run training events. If you would like to present or run a training workshop, please let Dale know!
Some of our events are open to the public; when they are, the Zoom link will be posted here. If you are University of Glasgow staff or a student, contact Dale for access to the Teams channel with the recordings and Zoom links for non-public events.
For a dynamically updated list of talks that you can subscribe to, please see the events list for M&Ms or Coding Club.
See GLMM Resources for resources related to our series on generalised linear mixed effects models.
See Data Visualisation Resources for resources related to visualising data using graphs and tables.
Speaker | Title | Date/Time | Location |
---|---|---|---|
Martin Lages & Jack Taylor |
Implementing Bayesian hierarchical signal detection models in brms
Secondary data analysis of individual signal detection parameters is problematic and has been replaced by hierarchical/multi-level signal detection models for detection and discrimination tasks. These hierarchical models are typically implemented in specialised statistical software (BUGS, JAGS, Stan). However, it can be difficult to exploit the advantages of these models across various experimental paradigms. Here, we use the versatile R package brms, which supports a surprising range of model variants. For example, (un)correlated mixed-effect Gaussian models with (un)equal variance are relatively straightforward to implement. Using existing data sets, we illustrate how this can be achieved and which options are available. We hope this approach will lead to wider application of hierarchical signal detection models in psychological research and beyond. [from the organizer:] We encourage in-person attendance when possible. For anyone who needs to attend remotely, use the link below: [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Apr 24, 2024 12:00-13:00 |
Level 5, Seminar Room |
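As background for this talk, the equal-variance Gaussian model can be illustrated with the classic per-participant point-estimate computation, which is exactly the approach the abstract describes as problematic for secondary analysis. A minimal Python sketch (the function name and the log-linear correction choice are ours, not from the talk):

```python
from statistics import NormalDist

def sdt_equal_variance(hits, misses, fas, crs):
    """Point estimates of d-prime and criterion under the equal-variance
    Gaussian signal detection model, with a log-linear correction
    (add 0.5 to each cell) to avoid infinite z-scores at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

Hierarchical implementations such as those in brms instead estimate these quantities jointly across participants, shrinking noisy individual estimates toward the group rather than computing them separately as above.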
James Bartlett & Guillaume Rousselet |
Are replications held to a different standard than original research? A
review of replication success criteria
In an empirical article, authors can pose a hypothesis without
explicitly stating what pattern of results would support or falsify
their prediction. Conversely, when authors directly replicate an
article, they must specify how they will decide if the finding
replicated or not, posing the question whether replications are held to
a higher standard than original research. Criteria for replication
success range from consistency of statistical significance and
confidence intervals around the effect size estimate to replication
Bayes factors.
We will demonstrate different replication success criteria using data
from a recent project and review their strengths and weaknesses. The
talk will be useful for considering how to decide whether a finding
replicates, prompting you to think more clearly about what pattern of
results would support or refute a prediction in original research.
|
Wed, May 01, 2024 12:00-13:00 |
Level 5, Seminar Room |
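Two of the criteria the abstract mentions can be sketched numerically. A hedged Python illustration assuming normal-theory effect estimates; the function names and cutoffs are ours, not the speakers':

```python
from statistics import NormalDist

def significant_same_direction(rep_effect, rep_se, alpha=0.05):
    """Replication 'success' as statistical significance in the
    predicted (here: positive) direction."""
    z = rep_effect / rep_se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p < alpha and rep_effect > 0

def ci_contains_original(orig_effect, rep_effect, rep_se, level=0.95):
    """Replication 'success' as the replication confidence interval
    covering the original effect size estimate."""
    zc = NormalDist().inv_cdf(1 - (1 - level) / 2)
    lower = rep_effect - zc * rep_se
    upper = rep_effect + zc * rep_se
    return lower <= orig_effect <= upper
```

The two criteria can disagree: a replication can be significant in the predicted direction yet incompatible with a (typically larger) original estimate, which is one reason the choice of criterion matters.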
Rich Ivry University of California, Berkeley |
Paper Discussion with Rich Ivry: “No Evidence for Semantic Prediction
Deficits in Individuals with Cerebellar Degeneration”
For this session of M&Ms, we are delighted to have Rich Ivry who will discuss his recent paper with Maedbh King and Sienna Bruinsma, “No Evidence for Semantic Prediction Deficits in Individuals with Cerebellar Degeneration.” We encourage in-person attendance when possible. For those who require remote attendance, use the following Zoom link: [Zoom link hidden. Please log in to view it, or contact the event organiser] Abstract: Cerebellar involvement in language processing has received considerable attention in the neuroimaging and neuropsychology literatures. Building off the motor control literature, one account of this involvement centers on the idea of internal models. In the context of language, this hypothesis suggests that the cerebellum is essential for building semantic models that, in concert with the cerebral cortex, help anticipate or predict linguistic input. To date, supportive evidence has primarily come from neuroimaging studies in which predictions are generated and violated. Taking a neuropsychological approach, we put the internal model hypothesis to the test, asking if individuals with cerebellar degeneration (n=14) show reduced sensitivity to semantic prediction. Using a sentence verification task, we compared reaction time to sentences that vary in terms of cloze probability. We also evaluated a more constrained variant of the prediction hypothesis, asking if the cerebellum facilitates the generation of semantic predictions when the content of a sentence refers to a dynamic rather than static mental transformation. The results failed to support either hypothesis: Compared to matched control participants (n=17), individuals with cerebellar degeneration showed a similar reduction in reaction time (RT) for sentences with high cloze probability and no selective impairment in predictions involving dynamic transformations.
These results challenge current theorizing about the role of the cerebellum in language processing, pointing to a misalignment between neuroimaging and neuropsychology research on this topic. |
Wed, May 08, 2024 12:00-13:00 |
Level 5, Seminar Room |
|
Discussion: Pothos & Busemeyer (2022). Review article on “Quantum cognition” |
Wed, May 15, 2024 12:00-13:00 |
Level 5, Seminar Room |
Sean Roberts |
Sean Roberts (title TBA) |
Wed, May 29, 2024 12:00-13:00 |
Level 5, Seminar Room |
Speaker | Title | Date/Time | Location |
---|---|---|---|
Emily Long School of Health & Wellbeing, MRC/CSO Social and Public Health Sciences Unit |
Social network analysis: Applications to health research
Dr. Emily Long is a Research Fellow in the School of Health & Wellbeing, MRC/CSO Social and Public Health Sciences Unit. Her research focuses on the intersection of social relationships and mental wellbeing, typically amongst young people. In this talk, she will provide a basic introduction to the field of social network analysis, discussing the types of study design, research questions, and unique insight these methods can provide. Preliminary findings from an applied project on mental health support in rural Scotland will then be discussed. [from the organizer:] We encourage in-person attendance whenever possible, but if you need to attend remotely, please use the following link: [Zoom link hidden. Please log in to view it, or contact the event organiser] Meeting ID: 825 9616 2744 Passcode: 750341 |
Wed, Apr 10, 2024 12:00-13:00 |
Level 5, Seminar Room |
Dale Barr, Christoph Scheepers, & Zofia Hauke |
Assessing concurrent activation during language processing: A
gaze-contingent alternative to the visual-world paradigm
For people who use eyetracking to study human cognition, it is an inescapable problem that the eyes can only be in one place while the mind can be in many. The problem is particularly relevant to visual-world studies of language processing, which typically use eye gaze probabilities as an estimate of hypothesized activation levels inside the minds of language users. But these probabilities themselves can only be estimated at the group level, and it is a known fallacy to draw inferences about individual-level cognition from group-level data. We present an idea for a new gaze-contingent paradigm that we believe can get at these activation levels, and discuss our plans for a validation study and grant proposal. [from the organizer:] We encourage in-person attendance when possible, but also welcome remote attendees who are willing to keep their cameras on during discussion phases of the meeting. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Apr 03, 2024 12:00-13:00 |
Level 5, Seminar Room |
Jinghui Liang & Dale Barr University of Glasgow |
Better power through design: A quasirandomization strategy to reclaim
untapped power in repeated-measures experiments
Low power is a longstanding problem for laboratory studies in the cognitive sciences, leading science reformers to advocate increasing sample sizes. Alternatively, we suggest that researchers may be able to increase power by tapping into an unexpected source: variation in measurement over time. The current field-wide standard in laboratory studies with repeated measurements is to randomize the presentation order of conditions without constraint. This standard is suboptimal given that measurements vary over time due to such factors as learning effects, fatigue, mind wandering, or the waxing and waning of attention. We introduce a simple quasirandomization algorithm—bucket-constrained randomization (BCR)—that increases power by ensuring a more uniform distribution of conditions over time. We explored the performance of BCR across three scenarios of time-dependent error: exponential decay, Gaussian random walk, and pink noise. Monte Carlo simulations suggest that BCR can potentially boost power between 3% and 18%, making research more efficient, while maintaining nominal false positive rates when time-dependent variation is absent. [from the organizer:] We encourage people to attend in person when possible, but provide the link below for anyone who requires remote attendance. Please note our “cameras on” policy for remote attendees. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Mar 13, 2024 12:00-13:00 |
Level 5, Seminar Room |
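The core blocking idea behind bucket-constrained randomization, as we read the abstract, can be sketched in a few lines. Treat this as an illustration of the general strategy under our own assumptions, not the authors' implementation, which may differ in its details:

```python
import random

def bucket_constrained_randomization(conditions, n_buckets, rng=random):
    """Divide the trial sequence into consecutive buckets, each holding
    exactly one trial of every condition in a freshly shuffled order.
    Conditions are thereby spread uniformly over time, so slow drifts
    such as fatigue or learning cannot pile up in a single condition."""
    sequence = []
    for _ in range(n_buckets):
        bucket = list(conditions)
        rng.shuffle(bucket)
        sequence.extend(bucket)
    return sequence
```

By contrast, unconstrained randomization can by chance place most trials of one condition late in the session, confounding condition with time-dependent error such as a random walk or pink noise.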
Xuanyi Ma University of Glasgow |
Towards a systematic approach for the measurement of real-time
psycho-physiological and emotional responses to music
Music, as a “language of emotions” (Cooke, 1959), plays an important role in many people’s daily lives. Its ability to communicate and induce emotions is valued by audiences (Juslin & Laukka, 2004). Although post-listening measurement is widely adopted in studies of musical emotions, music is an “art of time” (Alperson, 1980, p. 407), so this study focuses on the real-time measurement of participants’ psycho-physiological and self-reported emotional responses to music. Participants were asked to report their real-time emotional responses to two pieces of sad erhu music while their psycho-physiological responses (galvanic skin response and finger pulse) were recorded simultaneously using the Continuous Response Measurement Apparatus (Himonides, 2011) at a sampling rate of 1000 Hz. This seminar will focus on the experimental design and the time-series processing of the psycho-physiological data. Some findings, and a discussion of the enjoyment of negative emotions, may also be covered if time permits. References: Alperson, P. (1980). “Musical Time” and Music as an “Art of Time”. The Journal of Aesthetics and Art Criticism, 38(4), 407–417. Cooke, D. (1959). The language of music. Oxford University Press. Himonides, E. (2011). Mapping a beautiful voice: The continuous response measurement apparatus (CReMA). Journal of Music, Technology & Education, 4(1), 5–25. Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238. [from the organizer:] We encourage people to attend in person when possible, but provide the link below for anyone who requires remote attendance. Please note our “cameras on” policy for remote attendees. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Mar 06, 2024 12:00-13:00 |
Level 6, Meeting Room |
[group discussion] |
Almaatouq et al. (in press), Beyond Playing 20 Questions With Nature
In this meeting we will discuss the following paper: Beyond Playing 20 Questions with Nature: Integrative Experiment Design in the Social and Behavioral Sciences Abdullah Almaatouq, Thomas L. Griffiths, Jordan W. Suchow, Mark E. Whiting, James Evans, Duncan J. Watts The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment’s specific conditions. According to this view, which Alan Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. The researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. 
Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm—and with far greater efficiency. also available here if you can’t get through the paywall: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4284943 [from the organizer:] We encourage people to attend in person when possible, but provide the link below for anyone who requires remote attendance. Please note our “cameras on” policy for remote attendees. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Feb 21, 2024 12:00-13:00 |
Level 6, Meeting Room |
James Allen School of Health & Wellbeing |
Agent-based modeling in social psychology
In this talk I will present some background on the use of agent-based models (ABM) in social psychology, including the motivation and techniques used. Following this, I will present some results from three models from my own work, investigating inequality between groups; health behaviour change in romantic couples; and current work in progress investigating adolescent mental health. [from the organizer:] In-person attendance is encouraged. For those who require remote attendance the link is: [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Jan 24, 2024 12:00-13:00 |
Level 5, Seminar Room |
Tom Booth Psychology, University of Edinburgh |
Psychometrics (Title TBA) |
Wed, Jan 10, 2024 12:00-13:00 |
Level 5, Seminar Room |
Charles Burns |
Uncovering bias in data-driven estimates of reproducibility
The replication crisis in psychology and cognitive neuroscience is underpinned by underpowered studies, raising the question of how many participants are required for reliable science. Recently, in a widely circulated study, Marek et al. (2022) suggested that reproducible brain-wide association studies require thousands of participants. Here, we take a closer look at their meta-analytic methods, in particular estimates of statistical power resulting from resampling with replacement from a large dataset. Using simulated ground truth data, we demonstrate that their method is susceptible to large overestimates of statistical power which risk misleading the field. We discuss how to avoid overestimating power and further implications for the reproducibility of univariate brain-wide association studies, commenting on wider scientific methods. |
Wed, Nov 29, 2023 12:00-13:00 |
Level 5, Seminar Room |
James Bartlett & Jamie Murray |
Open Session
James and Jamie are looking for some feedback on their analyses. Each will give a short presentation and get feedback from the group. James will present on a direct replication project on correcting statistical misinformation. He will walk through replicating the target study’s analyses, before exploring how to address potential limitations in their approach. A Zoom option is available for those who cannot attend in person: [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Nov 22, 2023 12:00-13:00 |
Level 5, Seminar Room |
|
Discussion of Chater et al. (2022): The Paradox of Social Interaction:
Shared Intentionality, We-Reasoning, and Virtual Bargaining
In preparation for Nick Chater’s Friday seminar and virtual ‘visit’ to our department on Friday, November 10th, we will hold a discussion of his most recent paper on the topic of Virtual Bargaining. Chater, N., Zeitoun, H., & Melkonyan, T. (2022). The paradox of social interaction: Shared intentionality, we-reasoning, and virtual bargaining. Psychological Review, 129(3), 415–437. https://doi.org/10.1037/rev0000343 |
Wed, Nov 08, 2023 12:00-13:00 |
Level 5, Seminar Room |
|
CHANGED TO ONLINE: Discussion of Demszky et al. (2023): Using
large language models in psychology
PLEASE NOTE: By popular demand we have decided to make this an online-only discussion of the paper. We will discuss the recent Nature paper by Demszky et al. (2023), “Using Large Language Models in Psychology,” which covers the many potential ways that LLMs could have a transformative effect on the field. https://doi.org/10.1038/s44159-023-00241-5 |
Wed, Nov 01, 2023 12:00-13:00 |
Level 5, Seminar Room |
Danny Krupp Department of Interdisciplinary Studies, Lakehead University |
Rethinking Relative Deprivation
People have long been thought to feel frustration, resentment, and anger
when faced with (i) similar others who are (ii) unfairly advantaged—a
phenomenon known as relative deprivation. Contrary to this, I argue that
there has never been any clear theoretical or empirical justification
for the causal status of either similarity or fairness in the experience
of relative deprivation. Instead, I propose that relative deprivation is
primarily a response to competition, and that arguments about similarity
and fairness are unwittingly bound up with this. Using evolutionary game
theory, I show that conflict over resources readily evolves without any
appeal to similarity or fairness. Moreover, when introduced into the
model, similarity: has no effect on conflict when it indexes group
membership; reduces conflict when it indexes genealogical kinship;
reduces conflict when it indexes interdependence; and incites conflict
when partners directly compete. Further, tying the costs of conflict and
value of the resource to individual differences in baseline fitness has
a striking effect on conflict. These findings are at odds with
conventional theory, suggesting that similarity per se is not an
important cause of relative deprivation, but that competition is. By
corollary, fairness may be better understood not as a cause of relative
deprivation, but as a result of it—being invoked in competitive contexts
to justify conflict.
|
Wed, Oct 18, 2023 12:00-13:00 |
Level 5, Seminar Room |
Jinghui Liang |
Discussion: Simulating autocorrelated questionnaire datasets
Time-series effects in questionnaire item responses (e.g., practice effects, mental fatigue) lead to autocorrelated residual structures in datasets. In traditional psychometric analyses, such structures are difficult to recognize because they are washed out at the subject level. Using non-linear modelling techniques and quasi-randomized experimental design strategies, we found that the autocorrelated components for each subject can be captured and deconfounded from the mixture of effects. To examine how well some design strategies control autocorrelated residuals, we would like to develop functions to apply Monte Carlo simulation analysis to questionnaire datasets. However, we found this challenging. In this talk, I will briefly demonstrate how to simulate a common questionnaire dataset in an ideal scenario and lead a discussion about possible solutions for simulating questionnaire datasets with autocorrelated residual structures. [from the organizer:] We prefer in-person attendance when possible. If you require remote attendance, use the Zoom link below. Please note that we have a ‘cameras on’ policy for remote attendees. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Jun 28, 2023 12:00-13:00 |
Level 6, Meeting Room |
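One simple way to give simulated responses the autocorrelated residual structure this talk describes is an AR(1) process. A minimal Python sketch under our own assumptions (real questionnaire simulations would add item and subject effects on top of these residuals):

```python
import random

def ar1_residuals(n_items, phi=0.5, sigma=1.0, rng=random):
    """AR(1) residual series e[t] = phi * e[t-1] + noise[t], a minimal
    model of trial-order effects such as practice or mental fatigue."""
    e = [rng.gauss(0, sigma)]
    for _ in range(n_items - 1):
        e.append(phi * e[-1] + rng.gauss(0, sigma))
    return e

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation, useful for checking the simulation."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```

Adding such residuals to item-level predictions yields datasets in which subject-level aggregation washes out the time-series structure, which is the recognition problem the talk describes.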
Camilo Rodriguez Ronderos University of Oslo |
What pupillometry can tell us about lexical pragmatics: The case of
imprecision
Theoretical accounts of imprecision (e.g., saying that a line is straight when it is not ‘perfectly’ straight) claim that this is a type of pragmatic adjustment licensed by context. However, it is unclear how this adjustment translates to online sentence processing: Does it require more mental effort to understand imprecise relative to precise usages of the word ‘straight’? In this talk I will present evidence from pupillometry in support of theoretical accounts of the pragmatics of imprecision. I will also discuss possible linking hypotheses between pupil dilation and comprehension of pragmatics with the goal of better understanding the temporal dynamics of lexical pragmatic processing. [from the organiser:] We prefer in-person attendance when possible. If you require remote attendance, use the Zoom link below. Please note that we have a ‘cameras on’ policy for remote attendees. [Zoom link hidden. Please log in to view it, or contact the event organiser] |
Wed, Jun 21, 2023 12:00-13:00 |
Level 5, Seminar Room |
Dale Barr University of Glasgow |
Analysing circular data
For this session, I will summarize and lead discussion of the following paper: Cremers & Klugkist (2018). One direction? A Tutorial for Circular Data Analysis Using R With Examples in Cognitive Psychology. Frontiers In Psychology. https://doi.org/10.3389/fpsyg.2018.02040 The event will take place in person, but we also offer limited places for remote attendance (via Zoom). If you need to attend remotely, please obtain a ticket via EventBrite: https://www.eventbrite.co.uk/e/mms-modelling-circular-data-tickets-630921431987 ABSTRACT: Circular data is data that is measured on a circle in degrees or radians. It is fundamentally different from linear data due to its periodic nature (0° = 360°). Circular data arises in a large variety of research fields. Among others in ecology, the medical sciences, personality measurement, educational science, sociology, and political science circular data is collected. The most direct examples of circular data within the social sciences arise in cognitive and experimental psychology. However, despite numerous examples of circular data being collected in different areas of cognitive and experimental psychology, the knowledge of this type of data is not well-spread and literature in which these types of data are analyzed using methods for circular data is relatively scarce. This paper therefore aims to give a tutorial in working with and analyzing circular data to researchers in cognitive psychology and the social sciences in general. It will do so by focusing on data inspection, model fit, estimation and hypothesis testing for two specific models for circular data using packages from the statistical programming language R. |
Wed, May 24, 2023 12:00-13:00 |
Level 5, Seminar Room |
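The basic trap with circular data that this tutorial addresses is easy to show: a plain arithmetic mean of angles near the 0°/360° boundary points the wrong way. A minimal Python sketch of the standard vector-averaging fix (the mean direction), as one small taste of the methods covered:

```python
import math

def circular_mean_deg(angles_deg):
    """Mean direction of angles in degrees: average the unit vectors,
    then take the angle of the resultant with atan2. A plain arithmetic
    mean fails because 0 and 360 are the same direction."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360
```

For the angles 350° and 10°, the arithmetic mean is 180° (the opposite direction), while the circular mean is 0°.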
Discussion |
“Investigating Lay Perceptions of Psychological Measures” by Mason,
Pownall, Palmer, & Azevedo
James Bartlett will lead discussion of the preprint, “Investigating Lay Perceptions of Psychological Measures” by Mason, Pownall, Palmer, & Azevedo. Link: https://psyarxiv.com/jf58q Abstract of the preprint: In recent years, the reliability and validity of psychology measurement practices have been called into question, as part of an ongoing reappraisal of the robustness, reproducibility, and transparency of psychological research. While useful progress has been made, to date, the majority of discussions surrounding psychology’s measurement crisis have involved technical, quantitative investigations into the validity, reliability, and statistical robustness of psychological measures. This registered report offers a seldom-heard qualitative perspective to these ongoing debates, critically exploring members of the general public’s (i.e., non-experts’) lay perceptions of widely used measures in psychology. Using a combination of cognitive interviews and a think-aloud study protocol, participants (n = 23) completed one of three popular psychology measures. Participants reflected on each of the measures, discussed the contents, and provided perceptions of what the measures are designed to test. Coding of the think-aloud protocols showed that participants across the measures regularly experienced issues in interpreting and responding to items. Thematic analysis of the cognitive interviews identified three dominant themes that each relate to lay perceptions of psychology measurements. These were: (1) participants grappling with attempting to ‘capture their multiple selves’ in the questionnaires, (2) participants perceiving the questionnaire method as generally ‘missing nuance and richness’, and (3) exposing the ‘hidden labour of questionnaires’. These findings are discussed in the context of psychology’s measurement reform.
[from the organiser:] As context for the discussion, you may find it useful to read Flake & Fried (2020), “Measurement Schmeasurement: Questionable Measurement Practices and How To Avoid Them,” https://psycnet.apa.org/doi/10.1177/2515245920952393 The event will take place in person, but we also offer limited places for remote attendance (via Zoom). If you need to attend remotely, please obtain a ticket via EventBrite: https://www.eventbrite.co.uk/e/mms-investigating-lay-perceptions-of-psychological-measures-tickets-630913468167 |
Wed, May 17, 2023 12:00-13:00 |
Level 6, Meeting Room |
Pablo Arias |
Manipulating social signals in real time during social interactions
Over the past few years, we have been developing a new methodology to study human social cognition. Specifically, we have been using voice and face transformation algorithms to manipulate participants’ social signals in real time during social interactions. I will present two experiments utilizing this paradigm. First, an experiment where we manipulated vocal dominance to study its impact on group decision-making. Second, an experiment where we artificially aligned participants’ smiles during speed dating to elicit romantic attraction. I will explore these studies from a methodological perspective. First, I will analyse the types of scientific claims we can make with these datasets, as well as the phenomenological differences in comparison to non-transformed social interactions. Second, I will highlight the shared methodological structures that underlie these research projects—ranging from algorithmic specification to manuscript preparation and writing. I will conclude by introducing our new, free, and open-source experimental platform DuckSoup, which aims to enable researchers to run these kinds of experiments online, and which we are currently setting up on the department’s servers. [from the organizer:] The event will take place in person, but we also offer limited places for remote attendance (via Zoom). If you need to attend remotely, please obtain a ticket via EventBrite: https://www.eventbrite.co.uk/e/manipulating-social-signals-in-real-time-during-social-interactions-tickets-626191434437 |
Wed, May 03, 2023 12:00-13:00 |
Level 5, Seminar Room |
Christoph Scheepers |
On k-means clustering and finding needles in haystacks
[this event will be online-only rather than in person] K-means cluster analysis is an unsupervised machine learning technique for the purpose of partitioning N multivariate observations into K (< N) clusters, such that each observation belongs to the cluster with the nearest mean or cluster centroid. It is one of the simplest and most popular techniques for multivariate classification. The number of clusters (K) is usually unknown to the researcher and has to be estimated on the basis of data-driven optimization criteria (for which there are many different suggestions and approaches). There are different algorithms to choose from (with Hartigan-Wong being by far the most efficient) and an additional nuisance is that the clustering results partly depend on starting conditions. In this talk, I will compare random initialization methods (randomly select K observations as initial cluster centres) with a more deterministic approach to cluster initialization. Using a real data set (100 cases, two clustering variables), I will show that with increasing K, clustering becomes increasingly dependent on random starting conditions (regardless of algorithm), such that from a certain number of clusters, random initialization becomes a more important determinant of the final partitioning than the algorithm itself. Moreover, even a large number of random starts (say 100,000) does not always guarantee to find the ‘best’ partitioning for a given K, as deterministic initialization often finds better solutions in a fraction of the time. All of the above has important implications for determining the number of clusters in a data-driven manner. |
Wed, Apr 26, 2023 12:00-13:00 |
Level 5, Seminar Room |
Christoph Scheepers |
On k-means clustering and finding needles in haystacks
K-means cluster analysis is an unsupervised machine learning technique
for the purpose of partitioning N multivariate observations into K (<
N) clusters, such that each observation belongs to the cluster with the
nearest mean or cluster centroid. It is one of the simplest and most
popular techniques for multivariate classification. The number of
clusters (K) is usually unknown to the researcher and has to be
estimated on the basis of data-driven optimization criteria (for which
there are many different suggestions and approaches). There are
different algorithms to choose from (with Hartigan-Wong being by far the
most efficient) and an additional nuisance is that the clustering
results partly depend on starting conditions. In this talk, I will
compare random initialization methods (randomly select K observations as
initial cluster centres) with a more deterministic approach to cluster
initialization. Using a real data set (100 cases, two clustering
variables), I will show that with increasing K, clustering becomes
increasingly dependent on random starting conditions (regardless of
algorithm), such that from a certain number of clusters, random
initialization becomes a more important determinant of the final
partitioning than the algorithm itself. Moreover, even a large number of
random starts (say 100,000) does not always guarantee to find the ‘best’
partitioning for a given K, as deterministic initialization often finds
better solutions in a fraction of the time. All of the above has
important implications for determining the number of clusters in a
data-driven manner.
|
Wed, Apr 19, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Sander van Bree |
The rise of AI as a tool for writing, coding, and thinking in academia
If you ask GPT-4 to summarize a body of literature, debug your code, recommend a statistical analysis, or sketch your next paper’s introduction, it will give you meaningful answers. And with tech companies ramping up investments, Large Language Models (LLMs) will only get smarter. What does this mean for academic life? In this session, I will provide a forward-looking perspective on LLMs, exploring both their harms and benefits. On the one hand, productivity will increase, and a variety of barriers will disappear. On the other, LLMs might jeopardize core academic values such as independent thought—or even represent a threat to the future of knowledge work. These and other considerations serve to prime an open discussion for people to share their own thoughts and opinions, exploring together what consequences AI might have for science and academia at large. [from the organizer:] To ensure we have adequate time to discuss this important topic, the event will run for 90 minutes instead of the usual 60 minutes. Given the potential popularity of this topic, we have reserved the Seminar Room (L5) rather than the usual Meeting Room. The event will take place in person, but we also offer limited places for remote attendance (via Zoom). If you need to attend remotely, please obtain a ticket via EventBrite: https://www.eventbrite.co.uk/e/mms-with-sander-van-bree-the-rise-of-ai-in-academia-tickets-596009970867 |
Wed, Mar 29, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Simon Hanslmayr |
Plans for a UofG Neurotech Centre
For our meeting this week, we have invited Simon Hanslmayr from CCNi to
discuss his plans to develop a Centre for Neurotechnology. Come and
learn about his plans and find out how you might become involved.
|
Wed, Mar 22, 2023 12:00-13:00 |
Level 6, Meeting Room | ||||||
James Bartlett University of Glasgow |
A whole lotta nothing: Comparing statistical approaches to supporting
the null
Traditional frequentist hypothesis tests do not allow us to support the null hypothesis. There are scenarios where supporting the null would be a desirable inference, but you rarely see researchers use appropriate techniques in empirical psychology articles. In this talk, I will introduce and compare different approaches to supporting the null such as equivalence testing and Bayes factors. I will organise the talk around applying different approaches to the same research question and data set. I will provide supporting code, so I assume you can comfortably use R. A general understanding of Bayesian statistics will be useful but I will provide a brief introduction. [from the organizer:] This event will take place in person, but remote attendance via Zoom is also available. If you wish to attend remotely, please obtain a ticket via EventBrite: https://www.eventbrite.co.uk/e/james-bartlett-comparing-statistical-approaches-to-supporting-the-null-tickets-526292173057 |
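One of the approaches named in the abstract, Bayes factors, can be sketched with the {BayesFactor} package. This is a hedged illustration with simulated data, not the speaker's materials; function names follow the {BayesFactor} documentation.

```r
# Sketch: quantifying evidence FOR the null with a Bayes factor
library(BayesFactor)

set.seed(1)
g1 <- rnorm(50, mean = 0, sd = 1)  # simulated group 1
g2 <- rnorm(50, mean = 0, sd = 1)  # simulated group 2: no true difference

bf10 <- ttestBF(x = g1, y = g2)    # evidence for H1 relative to H0
bf01 <- 1 / extractBF(bf10)$bf     # invert to get evidence for H0
bf01  # values > 3 are often read as moderate support for the null
```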
Wed, Mar 08, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Tarryn Balsdon |
Pre-registration
Pre-registration is the practice of writing out your experimental design and plan for data collection and analysis on a register, prior to commencing your experiment. I will highlight some of the recent discussions about pre-registration and outline how I have found pre-registration with the open science framework helpful in my work. [message from the organizer:] This will be primarily an in-person discussion event. If you need to attend remotely, please sign up on EventBrite, and note our ‘cameras on’ policy for remote attendees. https://www.eventbrite.co.uk/e/mms-tarryn-balsdon-pre-registration-tickets-529072669597 |
Wed, Mar 01, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
|
Drop-in help session
Come to today’s session if you are looking to get advice or discuss any
aspect related to research methods and statistics. It will be an open
forum, and all are welcome to attend!
|
Wed, Feb 22, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Katie
Hoemann University of Leuven |
Using multimodal ambulatory sensing to capture emotion in context
Emotional experience unfolds as people make meaning of their body in context. To study this process ‘in the field’ of daily life, I use methods of ambulatory sensing capable of dense, multimodal data capture and pair these with dynamical and unsupervised analytic techniques. In this talk, I present my work using biologically triggered experience sampling and machine learning to model situated patterns of cardiorespiratory activity and how they map onto self-reported experiences and individual differences in emotion. I also present current and upcoming work incorporating methods from cognitive and computational linguistics to gain insight into descriptions of everyday experience and how these may vary across people and cultures. Throughout, I consider the potential impacts of these approaches for both theory and application – what can they tell us about the dynamics of embodied experience and meaning-making, and how might they help us tackle pressing social problems? [from the organizer:] This event will take place in person, but remote participation is also available. If you need to participate remotely, please sign up on EventBrite. We have a ‘cameras-on’ policy for remote participants. https://www.eventbrite.co.uk/e/mms-katie-hoemann-using-multimodal-ambulatory-sensing-tickets-520334964887 |
Wed, Feb 08, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Jinghui Liang |
Better administration of psychometric surveys with a jsPsych-based
collection platform
Similar to psycholinguistic experiments, participants’ responses to questions in questionnaire surveys are affected by time-varying fluctuations (e.g., mental fatigue) and contextual effects (e.g., priming effects). Ignoring these effects in analyses leads to biased estimates of reliability and validity, since the effects being examined are confounded with wording or temporal characteristics. Drawing on analysis strategies from experiments, the reaction time (RT) for each trial is an efficient index of time-varying fluctuations, and (quasi-)randomised/counterbalanced presentation orders can offset these effects at the subject level. However, existing survey data collection platforms provide very limited flexibility to administer psychometric surveys in this way. In this session, I’ll discuss these topics in detail and introduce a new jsPsych-based platform for collecting survey data. With jsPsych and common programming languages (R/JavaScript/PHP), this platform makes it possible to collect an RT for each question and offers high flexibility in organising the presentation order of a questionnaire. These properties help researchers better design and administer their online surveys. A demonstration of this platform can be found at https://github.com/Jinghui-Liang/rt_survey_demo if you are interested. [from the organizer:] This event will take place in person, but remote participation is also available. If you need to participate remotely, please sign up on EventBrite. We have a ‘cameras-on’ policy for remote participants. https://www.eventbrite.co.uk/e/mms-jinghui-liang-better-administration-of-psychometric-surveys-tickets-520332838527 |
Wed, Jan 25, 2023 12:00-13:00 |
Level 5, Seminar Room | ||||||
Lisa DeBruine University of Glasgow |
Workshop on Data Simulation in R using {faux}
This session will cover the basics of simulation using {faux}. We will simulate data with factorial designs by specifying the within and between-subjects factor structure, each cell mean and standard deviation, and correlations between cells where appropriate. This can be used to create simulated data sets to be used in preparing the analysis code for pre-registrations or registered reports. We will also create data sets for simulation-based power analyses. Attendees will need to have very basic knowledge of R and R Markdown, and have installed {faux}, {afex}, {broom} and {tidyverse}. Prep: install the R packages tidyverse, afex, faux, and broom from CRAN, and download files: faux-stub.Rmd. Please bring a laptop if you want to participate. The event will be recorded and the recording made available afterwards for people who are unable to attend. |
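The kind of call the workshop covers can be sketched as below. This is an illustration, not the workshop materials; argument names follow the {faux} documentation, and the means are hypothetical.

```r
# Sketch: simulate a 2 (condition, within) x 2 (group, between) design
library(faux)

dat <- sim_design(
  within  = list(condition = c("congruent", "incongruent")),
  between = list(group = c("control", "treatment")),
  n  = 50,                             # per between-subjects cell
  mu = list(control   = c(500, 520),   # hypothetical cell means (ms)
            treatment = c(500, 550)),
  sd = 100,
  r  = 0.5,                            # within-cell correlation
  long = TRUE,                         # long format for mixed models
  plot = FALSE
)
head(dat)
```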
Wed, Jan 18, 2023 12:00-13:00 |
Level 6, Meeting Room | ||||||
Dale Barr University of Glasgow |
Rebuilding Academic Communities After Lockdown: The View From
Meta-Science
[NB: This will be a discussion event that is available for in-person attendance only] The COVID-19 pandemic brought about many changes to academic norms for working and associating, changes which seem to have eroded local community structure. Why does this matter for metascience? I will defend the thesis that strong local communities are essential for improving research practice. To an isolated researcher, adopting open science practices may appear too risky and costly to be worth pursuing. From the perspective of computational social science, such practices are “complex contagions” that have dramatically different diffusion requirements as compared to simple contagions (e.g., the spread of news or viruses). In particular, they require social reinforcement from multiple actors, which in turn depends on the existence of dense local connections or “wide bridges” (Centola & Macy, 2007). Little is known about how these wide bridges come about, but I will present some speculative ideas drawn from sociology (specifically, Randall Collins’ Interaction Ritual theory) as well as from the psychology of communication that suggest a critical role for face-to-face interaction. |
Wed, Jan 11, 2023 12:00-13:00 |
Level 6, Meeting Room | ||||||
Juliane Kloidt |
[Presentation+Discussion] A conceptual and practical introduction to
Structural Equation Modelling – Part 2/2
Structural Equation Modelling (SEM) allows researchers to assess complex relationships between latent and observed variables, thereby specifying direct and mediating effects for multiple dependent variables. Whilst SEM has been praised for its analytic flexibility, disagreements about “good model fit” persist. The second talk for this mini-series presents the six modelling steps of (1) specification, (2) identification, (3) estimation, (4) evaluation, (5) re-specification, and (6) interpretation. Using worked examples, I will explain how to conduct path and mediation analyses, and how to calculate bootstrapped effect sizes. Throughout the presentation, I will discuss “good model fit” alongside issues of equivalent models, statistical power, and model overfitting. We will start this talk with a quick recap of last week’s session. So, please join us even if you could not attend the first session.
Note from Dale Barr, the organizer: This will be a primarily in-person discussion event. There will be a limited number of spots available for remote participation, which you can access through EventBrite: https://www.eventbrite.co.uk/e/mms-discussion-of-tremblay-et-al-tickets-472172309087 To help everyone feel included, we will ask remote participants to turn on their cameras. |
Wed, Nov 23, 2022 11:00-12:00 |
Level 5, Seminar Room | ||||||
James Bartlett University of Glasgow |
APA-style manuscripts with RMarkdown and papaja
Preparing manuscripts traditionally includes writing text, pasting results, and manually formatting references. In addition to the wasted time reformatting manuscripts for different journals, copying and pasting results introduces greater potential for error. More researchers are sharing data and analysis scripts, but the code might not be reproducible. Addressing all these concerns in one workflow may improve efficiency and reduce errors. In this tutorial, I will demonstrate the R package papaja (Aust & Barth, 2022). This package provides R Markdown templates and functions, meaning you can write a fully reproducible manuscript that combines prose and code. I will highlight the key features of papaja such as YAML options and knitting files, citations and references, reporting statistical tests, and creating tables and figures. I assume you have a basic understanding of using R and R Markdown, so the tutorial can focus on demonstrating what papaja can offer for preparing reproducible APA style manuscripts. [Note from Dale Barr, the organizer: In-person participation is strongly encouraged. If you need to participate in this session remotely, please contact me by email or Teams DM. Please arrive promptly. To ensure all participants are mutually aware of who is attending, the meeting will be locked at 12:05 and remote participants will be asked to turn their cameras on temporarily so that they can be greeted and welcomed into the discussion.] |
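The statistical-reporting feature mentioned above can be sketched in a couple of lines. This is a hedged illustration based on the papaja documentation, not the tutorial materials.

```r
# Sketch: papaja::apa_print() turns a test object into APA-formatted
# text that can be knitted inline in an R Markdown manuscript.
library(papaja)

tt  <- t.test(extra ~ group, data = sleep)  # built-in dataset
res <- apa_print(tt)
res$full_result  # use inline in the .Rmd as `r res$full_result`
```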
Wed, Nov 16, 2022 12:00-13:00 |
Level 6, Meeting Room | ||||||
Alistair Beith University of Glasgow |
Building Online Auditory Experiments with Accurate Reaction Times
How much trust are you willing to put in the audio equipment that a participant uses to take part in your online study? Many experiment building tools create studies that perform well in the lab but offer no way of knowing how they perform in the wild. The solution to this problem is not just to improve the performance, but also to measure it. This tutorial will present a selection of tools for building your own auditory experiments with jsPsych and measuring the timing accuracy of responses. With these tools you can present auditory stimuli with sample-level precision and collect responses with well-defined errors. No prior knowledge of jsPsych is required, and active participation is optional. [Note from Dale Barr, the organizer: In-person participation is strongly encouraged. If you need to participate in this session remotely, please contact me by email or Teams DM.] |
Wed, Nov 09, 2022 12:00-13:00 |
Level 5, Seminar Room | ||||||
Elaine Jackson and Martin Lages |
A Meta-Analysis on Juror Decisions
We are interested in whether the English verdict system (guilty/not
guilty) and the Scottish verdict system (guilty/not proven/not guilty)
affect conviction rates as reported in ten studies with different crime
types (e.g., murder, assault, rape) and a total of 1,986 jurors. The
results of a logistic regression with mixed effects suggest a
significant decrease in convictions for the Scottish compared to the
English verdict system. We discuss some implications of this result.
|
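The kind of model described in the abstract can be sketched with {lme4}. The data below are simulated stand-ins and all variable names are hypothetical; this is not the authors' analysis code.

```r
# Sketch: mixed-effects logistic regression of conviction on verdict
# system, with study as a random effect.
library(lme4)

set.seed(7)
jurors <- data.frame(
  study  = factor(rep(1:10, each = 200)),
  system = rep(c("English", "Scottish"), times = 1000)
)
# simulate a lower conviction rate under the Scottish system
eta <- 0.2 - 0.5 * (jurors$system == "Scottish") +
  rnorm(10)[as.integer(jurors$study)]
jurors$convicted <- rbinom(nrow(jurors), 1, plogis(eta))

mod <- glmer(convicted ~ system + (1 | study),
             data = jurors, family = binomial)
fixef(mod)  # a negative systemScottish coefficient = fewer convictions
```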
Wed, Nov 02, 2022 12:00-13:00 |
Level 5, Seminar Room | ||||||
Lisa
DeBruine University of Glasgow |
Intro to Code Review
Research transparency and integrity benefit greatly from computationally reproducible code, and there is an increasing emphasis on learning the skills to code. However, there hasn’t been as much emphasis on learning the skills to check code. People cite a lack of time, expertise, and incentives as reasons that they don’t ask others to review their research code, but the most commonly cited reason is embarrassment at having others see their code. In this introductory session, I will cover the goals of code review, some key concepts, and present checklists for preparing your code to be reviewed and for reviewing others’ code. We will also do a mock review of my own code. The hope is that code check sessions can become a regular part of the M&Ms seminar series. NOTE: This is an IN PERSON event (preferred) with a hybrid option. If you wish to attend remotely, please contact the organiser (Dale Barr) for the link. |
Wed, Oct 19, 2022 12:00-13:00 |
Level 6, Meeting Room | ||||||
Robin
Ince University of Glasgow |
Version control for data [ONLINE ONLY EVENT]
Git is now widely used for tracking and managing changes to source code
and text, but systematically tracking changes to data is less common.
There are now a range of tools available to track data from acquisition,
through analysis to results. This informal session will give a quick
overview of tools I looked at to address this problem, including Git
LFS, git-annex, DVC and Datalad. I will briefly cover the features of each of
these and why I chose DVC. I will then do a short demo of using DVC to
track changes to data, and keep data in sync between collaborators or
different computers. You can install DVC if you want to follow along
(not required): https://dvc.org/doc/install Time permitting, we will
talk a bit about DVC functionality around defining reproducible analysis
pipelines.
|
Wed, Oct 12, 2022 12:00-13:00 |
Zoom | ||||||
Dale Barr University of Glasgow |
Brainstorming Meeting
This will be a planning meeting for the 2022-3 Methods & Metascience (M&Ms) “brown bag” journal club. Do you have ideas for M&Ms training events, workshops, speakers, or papers to discuss? Are there particular topics you would like to see covered? Please come to our brainstorming meeting with your ideas and wishes for the coming year. This meeting will be IN PERSON in the Level 6 Meeting Room. If you cannot attend, please send any thoughts you have to Dale. Feel free to bring your lunch! |
Wed, Oct 05, 2022 12:00-13:00 |
Level 6, Meeting Room | ||||||
Chris Hartgerink Liberate Science |
ResearchEquals.com - step by step publishing
Open science brings many changes, yet publishing remains the same. As a result, many improvements in the research and education process can’t fulfill their promises. In order to facilitate a rapidly changing research ecosystem, ResearchEquals allows researchers to publish whatever outputs their work creates, instead of working to create outputs that can be published. Building on open infrastructures, ResearchEquals allows you to publish over 20 different types of research modules, with more being added based on the needs from you. Example modules include theory, study materials, data, or software. However, other outputs, like presentations, figures, or educational resources can also be published. All of these research steps are linked together, to create a research journey - recognizing that educational materials, research design and analysis are all part of our learning journeys. In this session you will get an introduction to ResearchEquals and how to join to co-create the platform. Chris Hartgerink (he/him/they) is the founder of Liberate Science, an organization repairing knowledge distribution. He was awarded his PhD in meta-research (Tilburg University; 2020) and now applies that knowledge to create practical systems for researchers to make better research easier. |
Wed, Apr 20, 2022 16:00-17:00 |
Zoom | ||||||
Phil
McAleer University of Glasgow |
Tables tutorial
This week we focus on tables for the M&Ms DataViz series. Phil will lead us through the tutorial Riding tables with {gt} and {gtExtras} by Benjamin Nowak. The {gt} package already makes it easy to create tables from a raw dataset, and the {gtExtras} extension adds many customisation options. We will illustrate the possibilities of these packages with the TidyTuesday dataset on Tour de France riders, extracted from Alastair Rushworth’s {tdf} package. |
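The kind of pipeline the tutorial builds looks roughly like this. A minimal sketch with made-up data, not the tutorial materials; function names are taken from the {gt} and {gtExtras} documentation.

```r
# Sketch: a styled table with {gt} and a {gtExtras} theme
library(gt)
library(gtExtras)

riders <- data.frame(
  rider = c("Eddy Merckx", "Bernard Hinault", "Chris Froome"),
  wins  = c(5, 5, 4)
)

riders |>
  gt() |>
  tab_header(title = "Tour de France wins (illustrative subset)") |>
  gt_theme_538()
```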
Wed, Apr 06, 2022 16:00-17:00 |
Zoom | ||||||
Kevin Guyan |
Queer Data
[Affiliated talk from LGBTQIA+ Reading Group] Data has never mattered more. Our lives are increasingly shaped by it and how it is defined, collected and used. But who counts in the collection, analysis and application of data? Kevin Guyan will discuss key themes from his new book Queer Data: Using Gender, Sex and Sexuality Data for Action including the relationship between data and visibility, the politics of who and how to count, and how data biases are used to delegitimise the everyday experiences of queer communities. This is a hybrid event - register for online or in-person attendance via Eventbrite. https://www.eventbrite.co.uk/e/reading-group-kevin-guyan-queer-data-tickets-256117744067 |
Wed, Mar 30, 2022 11:00-12:00 |
Zoom | ||||||
Balazs Aczel &
Marton Kovacs ELTE University, Budapest |
Developing tools and practices to promote open and efficient science
In this talk, I’ll introduce some new tools that aim to improve the efficiency of researchers’ work and the accumulation of knowledge. I’ll argue that minimizing extra workload and increasing ease of use are of key importance when introducing new research practices. The tools that I’ll share are: (1) The Transparency Checklist, a consensus-based general ShinyApp checklist to improve and document the transparency of research reports; (2) Tenzing, a solution that simplifies CRediT-based documentation and reporting of contributions to scholarly articles; (3) SampleSizePlanner, an app and R package that offer nine different procedures to determine and justify the sample size of a study design; and (4) the Multi-analyst guidance, a consensus-based guide for conducting and documenting multi-analyst studies. |
Wed, Mar 16, 2022 16:00-17:00 |
Zoom | ||||||
Lisa DeBruine University of Glasgow |
Discussion of “Fooled by beautiful data”
Continuing the
M&Ms DataViz
series with a discussion of “Fooled by beautiful data: Visualization
aesthetics bias trust in science, news, and social media”
(Lin & Thornton, 2021).
|
Wed, Mar 09, 2022 16:00-17:00 |
Zoom | ||||||
James
Bartlett University of Glasgow |
The Science of Visual Data Communication: What Works
Starting off the
M&Ms DataViz
series with a discussion of “The Science of Visual Data
Communication: What Works”
(Franconeri et al.,
2021).
|
Wed, Feb 23, 2022 16:00-17:00 |
Zoom | ||||||
Chris Hartgerink Liberate Science |
ResearchEquals.com - step by step publishing
Open science brings many changes, yet publishing remains the same. As a result, many improvements in the research and education process can’t fulfill their promises. In order to facilitate a rapidly changing research ecosystem, ResearchEquals allows researchers to publish whatever outputs their work creates, instead of working to create outputs that can be published. Building on open infrastructures, ResearchEquals allows you to publish over 20 different types of research modules, with more being added based on the needs from you. Example modules include theory, study materials, data, or software. However, other outputs, like presentations, figures, or educational resources can also be published. All of these research steps are linked together, to create a research journey - recognizing that educational materials, research design and analysis are all part of our learning journeys. In this session you will get an introduction to ResearchEquals and how to join to co-create the platform. Chris Hartgerink (he/him/they) is the founder of Liberate Science, an organization repairing knowledge distribution. He was awarded his PhD in meta-research (Tilburg University; 2020) and now applies that knowledge to create practical systems for researchers to make better research easier. |
Wed, Feb 16, 2022 16:00-17:00 |
Zoom | ||||||
Lisa DeBruine University of Glasgow |
PsyTeachR Glossary Hackathon
Learn about the open-source and crowdsourced PsyTeachR glossary and how to contribute. We’ll lead a hands-on session where you can help identify terms that need to be added or definitions that need more detail. You’ll learn how to file issues on github (but you can also participate without a github account). Zoom: [Zoom link hidden. Please log in to view it, or contact the event organiser] Glossary: https://psyteachr.github.io/glossary/ Resources: https://docs.google.com/document/d/1FOohcEbWTAXB7OWSvMZzwMI3F--pNNRT3_3DSB6KeB8/edit# |
Wed, Feb 09, 2022 16:00-17:00 |
Zoom | ||||||
Robin Ince University of Glasgow |
Within-participant statistics, prevalence and reproducibility
Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. Bayesian prevalence delivers a quantitative population estimate with associated uncertainty instead of reducing an experiment to a binary inference. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology, and neuroimaging. Its emphasis on detecting effects within individual participants can also help address replicability issues in these fields. https://elifesciences.org/articles/62461 |
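The core idea can be illustrated with a simplified grid approximation in base R. This is my own sketch, not the authors' code (see the linked eLife paper for the full method); it assumes, for simplicity, perfect within-participant sensitivity and a flat prior on prevalence.

```r
# Sketch: posterior over population prevalence gamma, given that
# k of n participants show a significant within-participant effect
# at false-positive rate alpha.
k <- 14; n <- 20; alpha <- 0.05
gamma <- seq(0, 1, by = 0.001)        # candidate prevalence values
theta <- gamma + (1 - gamma) * alpha  # P(significant | gamma)
post  <- dbinom(k, n, theta)          # likelihood x flat prior
post  <- post / sum(post)             # normalise to a posterior

gamma[which.max(post)]                # posterior mode (MAP) of prevalence
sum(post * gamma)                     # posterior mean
```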
Wed, Feb 02, 2022 16:00-17:00 |
Zoom | ||||||
Dan Quintana University of Oslo, Biological Psychiatry |
Increasing the credibility of meta-analysis
A very M&Ms-relevant talk hosted by the Workshop in Quantitative Methods: Several scientific fields are experiencing a reproducibility crisis, in which hypothesis-driven studies are failing to replicate. Poor reproducibility has been linked to several factors, but some of the most pertinent issues are analytical flexibility and lack of transparency. While these issues are becoming better known for primary research, they have received less attention in meta-analysis. In this talk, Daniel will present remedies for analytical flexibility, such as pre-registration, and ways to increase the transparency of analyses. Together, reducing analytical flexibility and increasing transparency will improve the credibility of meta-analysis. |
Wed, Jan 26, 2022 13:00-14:00 |
Zoom | ||||||
Clara Weale |
Introducing the opportunities and adventures at the Glasgow Library of Olfactory Material |
Wed, Jan 19, 2022 16:00-17:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club: Make it Pretty
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this last session, we’ll finish the app and make it visually appealing and all your own style. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials. |
Wed, Dec 15, 2021 14:00-15:00 |
Zoom | ||||||
Thomas Varley Indiana University |
Intersectional synergies: untangling irreducible effects of intersecting
identities via information decomposition
The idea of intersectionality has become a frequent topic of discussion both in academic sociology and among popular movements for social justice such as Black Lives Matter, intersectional feminism, and LGBT rights. Intersectionality proposes that an individual’s experience of society has aspects that are irreducible to the sum of one’s various identities considered individually, but are “greater than the sum of their parts.” In this work, we show that the effects of intersectional identities can be statistically observed in empirical data using information theory. We show that, when considering the predictive relationship between various identity categories such as race, sex, and income (as a proxy for class) on outcomes such as health and wellness, robust statistical synergies appear. These synergies show that there are joint effects of identities on outcomes that are irreducible to any identity considered individually and only appear when specific categories are considered together (for example, there is a large, synergistic effect of race and sex considered jointly on income irreducible to either race or sex). We then show using synthetic data that the current gold-standard method of assessing intersectionalities in data (linear regression with multiplicative interaction coefficients) fails to disambiguate between truly synergistic, greater-than-the-sum-of-their-parts interactions, and redundant interactions. We explore the significance of these two distinct types of interactions in the context of making inferences about intersectional relationships in data and the importance of being able to reliably differentiate the two. Finally, we conclude that information theory, as a model-free framework sensitive to nonlinearities and synergies in data, is a natural method by which to explore the space of higher-order social dynamics. https://arxiv.org/abs/2106.10338 |
Wed, Dec 08, 2021 16:00-17:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this seventh session, we will: make sure the user interface has a clear design, add help messages, and clean user inputs. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials. |
Wed, Dec 08, 2021 14:00-15:00 |
Zoom | ||||||
M&Ms |
Cancelled due to strike |
Wed, Dec 01, 2021 16:00-17:00 |
|||||||
Coding Club |
Cancelled due to strike |
Wed, Dec 01, 2021 14:00-15:00 |
|||||||
Paper
Discussion University of Glasgow |
Pushback against GLMMs
Paper discussion: Gomila, R. (2021). Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis. Journal of Experimental Psychology: General, 150(4), 700–709. https://doi.org/10.1037/xge0000920 Knief, U., & Forstmeier, W. (2021). Violating the normality assumption may be the lesser of two evils. Behavior Research Methods. https://doi.org/10.3758/s13428-021-01587-5 Ryan, W. H., Evers, E. R. K., & Moore, D. A. (2021). Poisson regressions: A little fishy. Collabra: Psychology, 7(1), 27242. https://doi.org/10.1525/collabra.27242 |
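The crux of the Gomila (2021) argument can be seen in a small base-R simulation: with a randomized binary treatment, OLS on the binary outcome estimates the difference in proportions directly, while logistic regression reports log-odds that must be back-transformed to the same quantity.

```r
# Sketch: linear vs logistic regression on a simulated binary outcome
set.seed(1)
n <- 1000
treat <- rbinom(n, 1, 0.5)            # randomized binary treatment
y <- rbinom(n, 1, 0.3 + 0.1 * treat)  # true risk difference = 0.1

coef(lm(y ~ treat))["treat"]          # risk difference, directly

m <- glm(y ~ treat, family = binomial)
p <- predict(m, data.frame(treat = c(0, 1)), type = "response")
diff(p)                               # same quantity via logistic model
```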
Wed, Nov 24, 2021 16:00-17:00 |
Zoom | ||||||
Lisa DeBruine University of Glasgow |
Coding Club: Input Interface
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this fifth session, we will create an input tab so people can add their ratings of the books. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials. |
Wed, Nov 24, 2021 14:00-15:00 |
Zoom | ||||||
Christoph
Scheepers University of Glasgow |
Effect size and power in (Generalized) Linear Mixed Models of response
time data
In their oft-cited tutorial on power and effect size in mixed effects models, Brysbaert and Stevens (2018; henceforth B&S18) make some important observations about the power of typical RT experiments with counterbalanced repeated-measures designs. They show that the traditional (and by now largely outdated) approach of performing repeated-measures ANOVAs on subject- and item-aggregated data tends to grossly over-estimate effect sizes in experiments of this kind. Indeed, when the same data are analysed with more state-of-the-art Linear Mixed Models (LMMs), which require trial-level observations, typical RT effects come out very small by common effect size standards. Based on their simulations, B&S18 recommend that RT experiments with repeated measures at subject and item level should have at least 1,600 observations in each condition (e.g., 40 participants × 40 items per design cell) to ensure sufficient power for small effects. However, B&S18 stop short of assessing effect size and power for more advanced alternatives to LMM which, in the context of analysing RT data, not only make more theoretical sense, but also bear the prospect of being more sensitive to cross-condition differences in RT. Indeed, Lo and Andrews (2015) raise a number of points against fitting LMMs to RT data, regardless of whether or not such data are transformed to make them more compliant with the normal distribution requirement in LMM. Instead, they recommend that RT data be modelled via Generalized Linear Mixed Models (GLMMs) of either the Inverse Gaussian or the Gamma model family to more appropriately account for the positive skew in RT distributions. Moreover, they recommend using the Identity link in such models to maintain the assumption of linear relations between predictor and outcome variables. In this talk, I will present re-analyses of an ‘overpowered’ dataset (1,020 subjects × 420 items) that was also used in B&S18.
I will show that LMMs on raw or transformed RTs produce much smaller estimates of population effect sizes (ds ~ 0.1 for a 16-millisecond priming effect) than Inverse Gaussian or Gamma GLMMs (ds ~ 0.4). This is mainly due to sub-optimal fits achieved by the former: Residuals account for more than 55% of the random variance in LMMs, but for practically 0% in GLMMs (in the latter, virtually all of the random variability in the data is explained by subject- and item-related random effects). Implications for power- and meta-analysis will be discussed. |
Wed, Nov 17, 2021 16:00-17:00 |
Zoom | ||||||
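B&S18's aggregation point is easy to demonstrate in a few lines. The sketch below (in Python for illustration only; the analyses in the talk itself use R, and all parameter values here are invented) simulates trial-level RTs with subject and item intercepts, then computes a standardised effect size from the trial-level data versus from subject-aggregated means:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_items, effect = 40, 40, 16.0  # a 16 ms priming effect

subj = rng.normal(0, 80, n_subj)[:, None, None]   # subject intercepts
item = rng.normal(0, 50, n_items)[None, :, None]  # item intercepts
resid = rng.normal(0, 120, (n_subj, n_items, 2))  # trial-level noise
cond = np.array([0.0, effect])                    # condition effect

rt = 600 + subj + item + cond + resid             # shape: (subj, item, cond)

# Effect size computed against trial-level variability (LMM-style):
diff = rt[..., 1] - rt[..., 0]
d_trial = diff.mean() / rt.std()

# Effect size after aggregating to subject means (by-subjects ANOVA style):
means = rt.mean(axis=1)                           # subject x condition means
subj_diff = means[:, 1] - means[:, 0]
d_agg = subj_diff.mean() / subj_diff.std()

print(d_trial, d_agg)
```

Because aggregation averages away the trial-level noise, the same 16 ms effect yields a much larger standardised effect size from the aggregated data, which is exactly the over-estimation B&S18 describe.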
Jack Taylor University of Glasgow |
Rating norms should be calculated from cumulative link mixed effect
models
Studies which provide norms of Likert ratings typically report per-item
summary statistics. Traditionally, these summary statistics comprise the
mean and the standard deviation (SD) of the ratings, and the number of
observations. Such summary statistics can preserve the rank order of
items, but provide distorted estimates of the relative distances between
items because of the ordinal nature of Likert ratings. Inter-item
relations in such ordinal scales can be more appropriately modelled by
cumulative-link mixed effects models (CLMMs). In a series of
simulations, and with a reanalysis of an existing rating norms dataset,
we show that CLMMs can be used to more accurately norm items, and can
provide summary statistics analogous to the traditionally reported means
and SDs, but which are disentangled from participants’ response biases.
CLMMs can be applied to solve important statistical issues that exist
for more traditional analyses of rating norms.
|
Wed, Nov 10, 2021 16:00-17:00 |
Zoom | ||||||
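The distortion described in the abstract above can be seen directly. This Python sketch (illustrative only; the talk's models are CLMMs fitted in R) places items with equally spaced latent means on a 5-point scale and computes their expected mean ratings:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_rating(latent_mean, thresholds):
    """Expected Likert mean for an item whose latent value is
    normal(latent_mean, 1), cut at the thresholds into ordered categories."""
    cuts = [-math.inf] + list(thresholds) + [math.inf]
    probs = [norm_cdf(cuts[k + 1] - latent_mean) - norm_cdf(cuts[k] - latent_mean)
             for k in range(len(cuts) - 1)]
    return sum((k + 1) * p for k, p in enumerate(probs))

thresholds = [-1.5, -0.5, 0.5, 1.5]        # 5-point scale
items = [-2.0, -1.0, 0.0, 1.0, 2.0]        # equally spaced latent item means
means = [mean_rating(m, thresholds) for m in items]
gaps = [round(b - a, 2) for a, b in zip(means, means[1:])]
print(gaps)  # equal latent gaps map to unequal gaps in mean ratings
```

Items near the ends of the scale are compressed, so mean ratings preserve the rank order of items but distort their relative distances, which is the issue a cumulative-link model corrects for.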
Lisa
DeBruine University of Glasgow |
Coding Club: Visualising Data
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this fourth session, we’ll create an interactive plot (maybe a map?) that visualises the form data. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials. |
Wed, Nov 10, 2021 14:00-15:00 |
Zoom | ||||||
Dale
Barr University of Glasgow |
A Gentle Introduction to Generalized Linear Mixed Models
Part of the GLMM series
|
Wed, Nov 03, 2021 16:00-17:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club: Filtering Data
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this third session, we will filter the book table by genre, and create a dynamic filtering input for location. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials. |
Wed, Nov 03, 2021 14:00-15:00 |
Zoom | ||||||
Amanda
Ptolomey University of Glasgow |
Encounters with zine making in participatory research
Zines are small DIY booklets or magazines, and making them requires no creative or specialist skills - just about anyone can make a zine. This seminar will explore zine-making as a creative research method. Drawing on my work facilitating zine-making with children, disabled young women, and adults, I reflect on the power of zines in the research encounter as media for agency and resistance, and as canvasses for self-expression through and beyond words. Bio Based in the Strathclyde Centre for Disability Research at the University of Glasgow, Mindy (Amanda) Ptolomey specialises in developing creative feminist methods to work with routinely excluded people. Alongside writing up her doctoral research developing zine-making as a creative feminist method with disabled young women, Mindy is currently contributing her expertise on zine-making to two additional research projects - Back Chat: developing arts based methods of knowledge generation and exchange with children in times of crisis (with Helen Lomax and Kate Smith, University of Huddersfield) and ‘Researchers Don’t Cry?! Locating, Articulating, Navigating and Doing Emotion in the Field’ (with Lisa Bradley and Nughmana Mirza, University of Glasgow). Mindy is also on the steering group for the UKRI Future Leaders funded study Following Young Fathers Further (PI Anna Tarrant, University of Lincoln) contributing her knowledge of zine-making and creative methods with children and young people. Mindy also facilitates young women focussed events including conferences, workshops, and film screenings as part of the feminist social science collective Girlhood Gang which she co-founded. Alongside her academic research Mindy has worked in community development and peacebuilding education for over a decade, designing and leading creative projects in Scotland and internationally. |
Wed, Oct 20, 2021 16:00-17:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club: Reading Data
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this second session, we’ll learn to read data from a Google Sheet. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials. |
Wed, Oct 20, 2021 14:00-15:00 |
Zoom | ||||||
Heather Cleland Woods, Phil McAleer, Lisa DeBruine, Kate
Reid University of Glasgow |
An ounce of prevention is worth a pound of cure (slides)
This seminar will highlight a few things you can do during the planning
stage of your project to help you develop healthy research habits, such
as thinking about formulating suitable research questions (qual and
quant), including positive and negative controls to test whether the
collected data make sense, justifying sample size and other design
decisions, or simulating data to create a concrete data cleaning and
analysis plan. The talk is aimed at undergraduate and MSc students just
starting their dissertations, but is applicable to anyone starting a
qualitative or quantitative research project.
|
Wed, Oct 13, 2021 15:00-16:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club: Setup
We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form. In this first session, we’ll set up a demo shiny app and decide what kind of data our forms will collect. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials. |
Wed, Oct 06, 2021 14:00-15:00 |
Zoom | ||||||
Dale Barr University of Glasgow |
Methods & Metascience Organizational Meeting for 2021-2022
Organisational meeting. All staff/students are welcome to come and help
plan activities of the methods centre for the coming year. Come and
discuss training needs, possible speakers and events, outreach
activities, and long-term plans.
|
Wed, Sep 29, 2021 14:00-15:00 |
Zoom | ||||||
Alan
Huebner University of Notre Dame |
Computational Tools and Applications for Generalizability Theory
Generalizability theory (G-theory) is a powerful, modern framework in which to conduct reliability analyses. The method allows the user to disentangle various sources of measurement error and find optimal measurement procedures. However, there has been a scarcity of user-friendly computer software resources for conducting basic and advanced G-theory analyses. This talk will give a brief overview of G-theory and current computational tools, discuss some applications to sports performance science, and demonstrate the use of the Gboot package in R for computing bootstrap confidence intervals for G-theory variance components and reliability coefficients. Code on GitHub: https://github.com/alanhuebner10/Gboot.git |
Wed, Jun 16, 2021 14:00-15:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club
Code along and create a shiny app that simulates and plots data with
user-input parameters. In this third session, we will restructure the
simulation functions to keep a record of previously simulated data to
make faceted plots for comparison. Materials to get caught up are at https://psyteachr.github.io/shiny-tutorials/coding_club.html
|
Wed, Jun 09, 2021 14:00-15:00 |
Zoom | ||||||
Maureen
Haaker UKData Service/ University of Suffolk |
(Re)Using Qualitative Data: Getting the Most Out of Data (slides)
|
Wed, Jun 02, 2021 14:00-15:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club
Code along and create a shiny app that simulates and plots data with
user-input parameters. In this second session, we will learn how to
check the user input for errors, such as impossible values, and handle
these without crashing. Materials to get caught up are at https://psyteachr.github.io/shiny-tutorials/coding_club.html
|
Wed, May 26, 2021 14:00-15:00 |
Zoom | ||||||
Guillaume
Rousselet University of Glasgow |
Benefits of learning and teaching bootstrap methods (slides)
In this session I’ll try to achieve two goals: first, to give a brief introduction to bootstrap methods, focusing on the percentile bootstrap; second, to illustrate how teaching the bootstrap is a great opportunity to introduce or consolidate key statistical concepts. Notably, building a bootstrap distribution requires discussions about sampling and the data acquisition process. A bootstrap distribution can then be used to compute important quantities: standard error, bias and confidence intervals, which can be discussed using graphical representations of bootstrap distributions. Teaching bootstrap methods also leads to the introduction of robust estimation, because contrary to a popular misconception, bootstrap methods are not robust on their own. Teaching bootstrap methods is an effective way to introduce Monte-Carlo simulations, requiring very little code alteration. Finally, consideration of bootstrap distributions is a great stepping stone to Bayesian posterior distributions, by considering a full distribution of plausible population values as opposed to a few summary statistics. https://journals.sagepub.com/doi/full/10.1177/2515245920911881 |
Wed, May 19, 2021 14:00-15:00 |
Zoom | ||||||
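For anyone who wants to play along before the session, the percentile bootstrap described above fits in a few lines. This Python sketch (illustrative; the session itself works in R) resamples skewed data with replacement and reads the standard error, bias, and confidence interval straight off the bootstrap distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=1.0, size=50)   # positively skewed data

# Percentile bootstrap: resample with replacement, recompute the estimator
# (here the median, a robust choice), and take percentiles of the
# bootstrap distribution as the confidence interval.
n_boot = 10000
boot_medians = np.median(
    rng.choice(sample, size=(n_boot, sample.size), replace=True), axis=1)

ci_low, ci_high = np.percentile(boot_medians, [2.5, 97.5])
se = boot_medians.std(ddof=1)                  # bootstrap standard error
bias = boot_medians.mean() - np.median(sample) # bootstrap bias estimate
print(ci_low, ci_high, se, bias)
```

Swapping `np.median` for any other estimator, or for a Monte-Carlo data generator, needs almost no code changes, which is the teaching point the abstract makes.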
Lisa
DeBruine University of Glasgow |
Coding Club |
Wed, May 12, 2021 14:00-15:00 |
Zoom | ||||||
Crystal Steltenpohl, James Montilla
Doble, Dana Basnight-Brown Society for the Improvement of Psychological Science |
Global Engagement Task Force
The Society for the Improvement of Psychological Science (SIPS) Global Engagement Task Force Report outlines several suggestions, specifically around building partnerships with geographically diverse open science organizations; increasing SIPS presence at other, more local events; diversifying remote events; considering geographically diverse annual conference locations; improving membership and financial resources; and surveying open science practitioners from geographically diverse regions. Three members of this task force will speak about the report and the broader issues relevant to open science and scientific organisations.
|
Wed, May 05, 2021 14:00-15:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club Shiny Quiz 4 |
Wed, Apr 28, 2021 14:00-15:00 |
Zoom | ||||||
Brianna Dym University of Colorado, Boulder |
Learning CS through Transformative Works: A perspective on
underrepresented groups learning to code
Computer science classrooms are notorious for driving out underrepresented groups like women, people of color, and LGBTQ+ people. Though neither intentional nor malicious, many social elements contribute to these chronic under-representations in computer science classrooms across all academic levels. To investigate alternative paths to broadening participation in computing, we interviewed folks from underrepresented groups in computing and found that many thought computer science classes were too boring or too hard, and held specific stereotypes about what “counts” as computer science in the first place. Instead, many of our participants had learned their computational skills from working on fun projects as part of a fan community. This talk explores what some of those projects are, why they were motivating to our participants, and what it might look like to bring projects from fandom into the classroom. Relevant paper: https://cmci.colorado.edu/~cafi5706/SIGCSE2021_ComputationalLabor.pdf |
Wed, Apr 21, 2021 14:00-15:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club Shiny Quiz 3
This week, we will continue working on the Buzzfeed-style quiz app. https://psyteachr.github.io/shiny-tutorials/coding_club.html |
Wed, Apr 14, 2021 14:00-15:00 |
Zoom | ||||||
Martin Lages University of Glasgow |
A Hierarchical Signal Detection Model with Unequal Variance for Binary
Data
Gaussian signal detection models with equal variance are typically used for detection and discrimination data whereas models with unequal variance rely on data with multiple response categories or multiple conditions. Here a hierarchical signal detection model with unequal variance is suggested that requires only binary responses from a sample of participants. Introducing plausible constraints on the sampling distributions for sensitivity and response criterion makes it possible to estimate signal variance at the population level. This model was applied to existing data from memory and reasoning tasks and the results suggest that parameters can be reliably estimated, allowing a direct comparison of signal detection models with equal- and unequal-variance. https://psyarxiv.com/tbhdq/ |
Wed, Apr 07, 2021 14:00-15:00 |
Zoom | ||||||
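For context on the talk above: in the standard equal-variance Gaussian model, sensitivity and criterion are computed directly from a participant's hit and false-alarm rates; it is the unequal-variance extension for binary data that needs the hierarchical constraints the talk proposes. A minimal Python sketch of the equal-variance computation (for illustration; the paper's hierarchical model is fitted with Bayesian software):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF (z-transform)

def sdt_equal_variance(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c
    from one hit rate and one false-alarm rate."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

d_prime, c = sdt_equal_variance(0.80, 0.20)
print(d_prime, c)
```

With only one hit rate and one false-alarm rate per participant, these two parameters exhaust the data, which is why estimating a third parameter (signal variance) requires pooling across participants at the population level.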
Lisa
DeBruine University of Glasgow |
Coding Club: Shiny Quiz 2
This week, we will continue working on the Buzzfeed-style quiz app to add code that records choices and controls the question progression. Prep for this meeting:
You can access all materials at the coding club website. |
Wed, Mar 31, 2021 14:00-15:00 |
Zoom | ||||||
Charles
Gray Newcastle University |
It’s good enough to fail at reproducible science (Charles Gray)
As scientists, we want to demonstrate to others (including our future selves) that our computational scientific work validates the claims we make in published research. Being able to reproduce the work is a key component of easily validating an analysis, as well as extending the work. Reproducibility is not as simple, however, as sharing the code and data that produced our analysis. We must consider how our data and algorithms are understood, variables defined, etc. Our code needs to be accessible and interoperable. There is much guidance about best practice, but this is often overwhelming for those of us who do not have formal computer science training. In this talk I share the story of how I attempted good enough practice in my doctoral thesis, and found even this was simply too much to achieve. Indeed, despite failing to achieve good enough scientific practice in any aspect of reproducibility, I discovered a delightful world of scientific workflows in R that make every day just that bit easier now. Failing at reproducible science in R is the best we can aim for; we will never achieve best practice in every aspect of reproducibility, and if we failed, then we tried. Through this process we discover useful tools to make the next time easier. Bio: Dr Charles T. Gray is a postdoc with the evidence synthesis lab at Newcastle University in the UK. She is currently specialising in Bayesian network meta-analysis in Cochrane intervention reviews. She has three degrees: arts, music, and mathematics. None of these prepared her for writing an interdisciplinary computational metascience doctoral thesis in R. |
Wed, Mar 24, 2021 14:00-15:00 |
Zoom | ||||||
Lisa
DeBruine University of Glasgow |
Coding Club: Shiny Quiz
Join us to learn more about coding with fun tasks. This week, we’ll be
working on a shiny app for collecting quiz data. Download the template
at https://github.com/debruine/shiny_template to code
along, or just join and watch. The full code for the app will be
available after the session.
|
Wed, Mar 17, 2021 14:00-15:00 |
Zoom | ||||||
Dale Barr University of Glasgow |
Time-varying trends in repeated measures data: Model, control, or
ignore?
Almost all experiments in psychology and neuroscience involve repeated measurements taken over time. Very often, measurements from individual subjects show temporal variation due to such factors as practice, fatigue, mind wandering, or environmental distraction. However, in our statistical analyses we generally ignore this temporal variation. Under what circumstances is this a problem? And what are the potential benefits of modeling trial-by-trial fluctuations in performance? I will discuss pros and cons of various options for time-series modeling, including non-linear mixed-effects models, polynomial regression, and generalized additive mixed models (GAMMs). I will also discuss the potential for boosting power through experimental design, including results from a new project that is attempting to improve upon conventional randomization strategies. |
Wed, Mar 10, 2021 14:00-15:00 |
Zoom | ||||||
Robin Ince University of Glasgow |
Discussion: Edgington & Onghena, Randomization Tests
For more details, including how to access the readings, see event
announcement on Microsoft Teams
|
Fri, Oct 16, 2020 15:30-16:30 |
Zoom | ||||||
Jack Taylor University of Glasgow |
Bayesian Estimation of Hierarchical Models with Distributional
Parameters (slides)
The recent M&Ms have led to an interesting debate on sample size and modelling variability. You might know that linear mixed effects models let you look at subject/item variability with random intercepts and slopes, but what if you want to model more than just the mean? What if you want to know how subjects/items differ in variance? Have no fear, Bayesian estimation is here! With Bayesian estimation you can build a single model that estimates the effect of your predictors on any parameters of whatever distribution your data takes. Throw in some random effects and hey presto! You’ve got a mixed effects model with distributional parameters. I’ll show an example of a model estimating the same maximal random effects structure for multiple distributional parameters of a non-Gaussian (shifted log-normal) distribution, which lets us look at subject/item variability of the intercepts and slopes of each parameter. This paper shows some advantages of such models in accurately estimating effects and subject variability, and makes some interesting links to the large/small N debate: Rouder, J. N., Lu, J., Speckman, P., Sun, D., and Jiang, Y. (2005). A hierarchical model for estimating response time distributions. Psychonomic Bulletin and Review, 12(2), 195-223. http://doi.org/10.3758/BF03257252 |
Fri, Jul 03, 2020 15:30-16:30 |
Zoom | ||||||
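The point of modelling more than the mean can be illustrated with the shifted log-normal distribution mentioned above (a Python sketch with invented parameter values; the talk's models are Bayesian and fitted in R). Two conditions can have essentially identical central tendency while differing clearly in spread, a difference a mean-only model would miss:

```python
import numpy as np

rng = np.random.default_rng(7)

def shifted_lognormal(mu, sigma, shift, size):
    """RTs = shift + exp(normal(mu, sigma)); mu, sigma, and shift are all
    'distributional' parameters a model could regress on predictors."""
    return shift + rng.lognormal(mean=mu, sigma=sigma, size=size)

# two conditions that differ only in sigma (spread), not in mu or shift:
a = shifted_lognormal(mu=5.5, sigma=0.2, shift=200, size=20000)
b = shifted_lognormal(mu=5.5, sigma=0.4, shift=200, size=20000)

print(np.median(a), np.median(b))  # medians barely differ...
print(a.std(), b.std())            # ...but the spreads clearly do
```

A distributional mixed-effects model lets each of these parameters carry its own fixed and random effects, so subject and item variability in spread can be estimated alongside variability in location.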
Casper Hyllested University of Glasgow |
Generalizability Theory: Understanding variance across multiple facets
in research (slides)
Several fields in the social sciences are facing a replication crisis. This can be partially attributed to sub-optimal experimental designs, yet it is in large part also due to the inherent differences between people and populations. When an experiment is done without sufficient consideration of how well results translate between different occasions, raters, tests, or any number of potential facets, the likelihood that the findings are reliable decreases greatly. One way to quantify this is through Generalizability Theory (GT), which analyzes how much variance in results can be attributed to any chosen facets of generalization through considering both main and interaction effects. Recently it has been used in repeated measures experiments to determine whether tests, subscales or even individual items measure states or traits in individuals through a signal-to-noise ratio calculation named the Trait Component Index (TCI). In this talk I will first go through the basic precepts and theories underlying GT, and then focus primarily on the breadth of application to research in psychology. I will demonstrate this through a brief range of results from previous studies and will then cover the results from my own dissertation in greater detail. Finally, and perhaps most importantly, I will outline some of the current limitations of GT and consider alternative methods in an effort to avoid replacing one default analysis with another, but instead highlight the advantages of a broader methodological repertoire in research. Some optional background reading Casper has suggested: Bloch, R., & Norman, G. R. (2012). Generalizability theory for the perplexed: A practical introduction and guide: AMEE Guide No. 68. Medical Teacher, 34(11), 960-992. Medvedev, O. N., Krägeloh, C. U., Narayanan, A., & Siegert, R. J. (2017). Measuring Mindfulness: Applying Generalizability Theory to Distinguish between State and Trait. Mindfulness, 8(4), 1036 - 1046.
Steyer, R., Mayer, A., Geiser, C., & Cole, D. A. (2015). A Theory of States and Traits-Revised. Annual Review of Clinical Psychology, 11(1), 71-94. Casper’s dissertation: https://smallpdf.com/shared#st=15d7fde3-6d68-46b2-af62-f18f7016bb1a&fn=2269064h+final+dissertation+draft-konverteret.pdf&ct=1592549319184&tl=word&rf=link |
Fri, Jun 26, 2020 15:30-16:30 |
Zoom | ||||||
Mine Çetinkaya-Rundel University of Edinburgh, Duke University, and RStudio |
The art and science of teaching data science
Modern statistics is fundamentally a computational discipline, but too
often this fact is not reflected in our statistics curricula. With the
rise of data science it has become increasingly clear that students
want, expect, and need explicit training in this area of the discipline.
Additionally, recent curricular guidelines clearly state that working
with data requires extensive computing skills and that statistics
students should be fluent in accessing, manipulating, analyzing, and
modeling with professional statistical analysis software. In this talk,
we introduce the design philosophy behind an introductory data science
course, discuss in-progress and future research on student learning as
well as new directions in assessment and tooling as we scale up the
course.
|
Thu, Jun 18, 2020 12:00-13:00 |
Zoom | ||||||
|
Discussion of Smith & Little (2018), “In defense of the small N
design”
Following up on Robin Ince’s suggestion, we will meet to discuss the following paper: Smith, P.L., Little, D.R. Small is beautiful: In defense of the small-N design. Psychon Bull Rev 25, 2083–2101 (2018). https://doi.org/10.3758/s13423-018-1451-8 There will be a brief summary of the paper followed by commentary and group discussion. |
Fri, Jun 12, 2020 15:30-16:30 |
Zoom | ||||||
Robin Ince University of Glasgow |
Reconsidering population inference from a prevalence perspective
Within neuroscience, psychology and neuroimaging, it is typical to run an experiment on a sample of participants and then apply statistical tools to quantify and infer an effect of the experimental manipulation in the population from which the sample was drawn. Whereas the current focus is on average effects (i.e. the population mean, assuming a normal distribution [1]), it is equally valid to ask an alternative question: how typical is the effect in the population[2]? That is, we infer an effect in each individual participant in the sample, and from that infer the prevalence of the effect in the population[3–6]. We propose a novel Bayesian method to estimate such population prevalence, based on within-participant null-hypothesis significance testing (NHST). Applying Bayesian population prevalence estimation in studies sufficiently powered for NHST within individual participants could address many of the issues recently raised regarding replicability[7]. Bayesian prevalence provides a population-level inference currently missing for designs with small numbers of participants, such as traditional psychophysics or animal electrophysiology[8,9]. Since Bayesian prevalence delivers a quantitative estimate with associated uncertainty, it avoids reducing an entire experiment to a binary inference on a population mean[10].
| ||||||||
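The prevalence idea above can be sketched with a simple binomial model (a Python illustration, not the authors' implementation; it assumes a uniform prior and, for simplicity, a within-participant test sensitivity of 1). If each of n participants yields a significant test with probability theta = (1 - gamma) * alpha + gamma * beta, where gamma is the population prevalence, then observing k significant participants gives a posterior over gamma:

```python
import math

def prevalence_posterior(k, n, alpha=0.05, beta=1.0, grid=1001):
    """Grid posterior over population prevalence gamma, with a uniform
    prior and a binomial likelihood for k of n significant participants."""
    comb = math.comb(n, k)
    gammas = [i / (grid - 1) for i in range(grid)]
    lik = []
    for g in gammas:
        theta = (1 - g) * alpha + g * beta   # P(significant | gamma)
        lik.append(comb * theta**k * (1 - theta)**(n - k))
    total = sum(lik)
    post = [l / total for l in lik]          # normalise on the grid
    return gammas, post

# e.g. 14 of 20 participants show a significant within-participant effect:
gammas, post = prevalence_posterior(k=14, n=20)
map_gamma = gammas[post.index(max(post))]
print(map_gamma)
```

The output is a full posterior distribution over prevalence, so the experiment yields a quantitative estimate with uncertainty rather than a single binary inference about the population mean.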
Group Discussion |
Variability in the analysis of a single neuroimaging dataset by many
teams
Please join us to discuss the following paper:
Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature (2020). https://doi.org/10.1038/s41586-020-2314-9 Abstract: “Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.” Katarina Moravkova will provide a summary of the paper at the start of the session. |
Fri, May 29, 2020 15:30-16:30 |
Zoom | ||||||
|
Discussion of Yarkoni, “The Generalizability Crisis”
In this meeting we will discuss Yarkoni’s paper “The Generalizability Crisis”, along with a critique by Lakens and Yarkoni’s response. The format will be a general overview followed by discussion in small randomly allocated breakout rooms. The meetings will be held via Zoom. See the M&Ms Teams channel for a link. Pre-print by Tal Yarkoni: https://psyarxiv.com/jqw35 Review by Daniel Lakens: http://daniellakens.blogspot.com/2020/01/review-of-generalizability-crisis-by.html Rebuttal by Yarkoni: https://www.talyarkoni.org/blog/2020/05/06/induction-is-not-optional-if-youre-using-inferential-statistics-reply-to-lakens/ |
Fri, May 15, 2020 15:30-16:30 |
Zoom | ||||||
Group Discussion |
Gelfert (2016), “How to Do Science with Models”, Chapters 1 to 3
(pp. 1-70)
For this Zoom meeting, the host will provide a quick summary of Gelfert (2016), “How to Do Science with Models”, Chapters 1 to 3 (pp. 1-70). This will be followed by small group discussion in randomly allocated breakout rooms. To find out about accessing Gelfert (2016) and to get the Zoom link please see the M&Ms channel on Microsoft Teams or reply to the announcement email. |
Fri, May 01, 2020 15:30-16:30 |
Zoom | ||||||
Jessie Sun University of California at Davis |
Eavesdropping On Everyday Life
This talk considers the unique insights that can be gained by combining multiple methods for studying daily life. In the Personality and Interpersonal Roles Study (PAIRS), 300 participants completed experience sampling method (ESM) self-reports while wearing the Electronically Activated Recorder (EAR), an unobtrusive audio recording device, for one week. Over the past five years, nearly 300 research assistants have transcribed and coded participants’ behaviors and environments from over 300,000 EAR audio files. To provide two examples of questions that can only be answered by capturing actual behavior alongside ESM self-reports, I will describe two projects that have resulted from this dataset: 1) Do people have self-knowledge of their momentary personality states, and 2) What are people doing when they miss experience sampling reports? I will conclude by discussing the opportunities and challenges of implementing open practices with this highly-identifiable and repeatedly-used dataset. |
Fri, Feb 28, 2020 12:00-13:00 |
Level 5, Seminar Room | ||||||
Ruud
Hortensius University of Glasgow |
Tools for reproducible fMRI analysis
On our way to transparent and reproducible neuroimaging we need to consider data and code in combination with the publication. In this talk, I will introduce a series of tools that the field has developed (BIDS, Heudiconv, MRIQC, MRIQCeption, fMRIprep, OpenNeuro, NeuroVault) that will not only help to achieve this goal of fully reproducible neuroimaging but also make a neuroimager’s life easier. Note: while the tools are focussed on neuroimaging, the principles hold for behavioural, eye-tracking and physiological measures (e.g., BIDS is applicable to these measures). |
Wed, Feb 19, 2020 14:00-15:00 |
Level 5, Seminar Room | ||||||
Lisa DeBruine University of Glasgow |
R Workshop, part 2: Understanding Mixed-Effects Models through Data Simulation (materials) |
Wed, Feb 05, 2020 14:00-15:00 |
Level 5, Seminar Room | ||||||
Lisa DeBruine University of Glasgow |
R Workshop, part 1: Introduction to Data Simulation (materials) |
Tue, Jan 28, 2020 13:00-14:00 |
Level 5, Seminar Room | ||||||
Zoltan
Dienes University of Sussex |
How to obtain evidence for the null hypothesis (paper)
To get evidence for or against one’s theory relative to the null
hypothesis, one needs to know what it predicts. The amount of evidence
can then be quantified by a Bayes factor. It is only when one has
reasons for specifying a scale of effect that the level of evidence can
be specified for no effect (that is, non-significance is not a reason
for saying there is no effect). In almost all papers I read people
declare absence of an effect while having no rational grounds for doing
so. So we need to specify what scale of effect our theory predicts.
Specifying what one’s theory predicts may not come naturally, but I show
some ways of thinking about the problem. I think our science will be
better for it!
|
Wed, Jan 22, 2020 14:00-15:00 |
Level 5, Seminar Room | ||||||
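Dienes's point that evidence for the null depends on the scale of effect the theory predicts can be made concrete. This Python sketch (illustrative only; Dienes provides his own calculators, and the half-normal model of the predicted effect used here is one common choice) computes a Bayes factor by integrating the likelihood of the observed effect over the prior predicted by the theory:

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf_half_normal(obs, se, scale, steps=4000, upper_mult=10):
    """Bayes factor for H1 (effect ~ half-normal with the given scale,
    so smaller effects are more plausible than larger ones) versus
    H0 (effect = 0), given an observed effect and its standard error."""
    upper = upper_mult * scale
    d = upper / steps
    # marginal likelihood under H1: average the likelihood over the prior
    # (midpoint rule; 2 * norm_pdf is the half-normal density on [0, inf))
    m1 = sum(norm_pdf(obs, delta, se) * 2 * norm_pdf(delta, 0, scale) * d
             for delta in (i * d + d / 2 for i in range(steps)))
    m0 = norm_pdf(obs, 0, se)                 # likelihood under H0
    return m1 / m0

# a small, noisy observed effect relative to the scale the theory predicts:
print(bf_half_normal(obs=1.0, se=5.0, scale=10.0))
```

With an observed effect of 1 unit (SE 5) against a predicted scale of 10, the Bayes factor falls below 1, i.e. the data lean toward the null, which a nonsignificant p value alone could never establish.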
Guillaume
Rousselet University of Glasgow |
Why interactions are difficult to interpret and many are simply
uninterpretable (slides)
References: [1] On interpretation of interactions https://link.springer.com/content/pdf/10.3758/BF03197461.pdf [2] On the interpretation of removable interactions: A survey of the field 33 years after Loftus https://link.springer.com/article/10.3758/s13421-011-0158-0 [3] Interactions in logistic regression models https://janhove.github.io/analysis/2019/08/07/interactions-logistic |
Wed, Jan 15, 2020 14:00-15:00 |
Level 5, Seminar Room | ||||||
|
Is There a Generalizability Crisis in Psychology and Neuroscience?
For this meeting we will discuss a controversial new paper by Tal
Yarkoni, “The Generalizability Crisis,” available at https://psyarxiv.com/jqw35.
|
Wed, Jan 08, 2020 14:00-15:00 |
Level 5, Seminar Room | ||||||
Dale Barr University of Glasgow |
Containerize your code: Creating Reproducible Software Environments (slides) |
Wed, Nov 20, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Andrew
Burns Urban Studies, University of Glasgow |
Qualitative data – reaching the parts that statistics can’t?
An interactive seminar using data collected in an ethnography of
homelessness (using participant observation, observation, and
qualitative [walking] interviews). While originally used for an
anthropological analysis, we can use this seminar to explore the place
of such data in psychology (is there a place?) including the methods of
data collection, different approaches to coding/analysis, writing up,
and the reflexive researcher.
|
Tue, Oct 29, 2019 12:00-13:00 |
Level 5, Seminar Room | ||||||
Dr. Jo Neary & Kathryn Machray University of Glasgow |
Take a picture, it’ll last longer: Practical and ethical considerations
on using visual methods in qualitative research
For many people, taking and sharing photographs is an everyday behaviour, and is a way of sharing parts of your life (and your identity) with friends and family. This session discusses how research can utilise this everyday practice in a research setting, in order to shine a light on the elements of a participant’s life that may be inaccessible to traditional methods (such as surveys). In doing so, we explore the links between visual methods and ethnography, the use of visual methods in readdressing the power imbalance inherent in research, and some of the practical and ethical considerations of the method. This session will include two case studies from our research (children’s experience of neighbourhood demolition; men’s experience of food poverty), as well as participation from the audience. |
Tue, Oct 22, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Anne Scheel Eindhoven University of Technology |
Is Hypothesis Testing Overused in Psychology?
A central goal of many of the open-science reforms proposed in reaction to the replication crisis is to reduce false-positive results in the literature. Very often, they assume that research consists of confirmatory hypothesis tests and that ‘questionable research practices’ are ways in which researchers cut corners to present evidence in favour of hypotheses that may in fact be false. Two increasingly popular methods to prevent this from happening are preregistration and Registered Reports: Both require that authors state their hypotheses and analysis plan before conducting their study, which is supposed to prevent twisting the data to fit the narrative (e.g. p-hacking) or twisting the narrative to fit the data (hypothesising after results are known). In theory, this practice safeguards the validity of inferences drawn from hypothesis tests by removing any ‘wiggle room’ authors could exploit to produce spurious positive results. In practice, many psychologists seem to struggle to fit their research into this revised, now much more narrow framework of confirmatory hypothesis testing: Preregistration has been accused of stifling creativity, is described as difficult even by its proponents, and analyses of published preregistrations show that most do not sufficiently restrict the above-mentioned wiggle room. I want to argue that by making the very strict requirements of a confirmatory hypothesis test so explicit, preregistration and Registered Reports reveal that hypothesis testing may be the wrong tool for a substantial number of research questions in psychology. The conflation of exploratory and confirmatory research that psychologists have been used to may have stifled the development of a framework for high-quality exploratory research, which is the necessary basis for developing hypotheses in the first place.
As such, resistance against preregistration and some of the growing pains the format is experiencing may simply be a consequence of it laying bare the misfit between research goals and the excessive focus on hypothesis testing in psychology. If this is true, psychologists may be well advised to shift this focus and work towards better literacy in the exploratory ground work that precedes confirmatory hypothesis tests. |
Fri, Oct 18, 2019 12:00-13:00 |
Level 5, Seminar Room | ||||||
Benedict Jones University of Glasgow |
Introducing p curve (slides)
P-curve is an alternative to traditional forms of meta-analysis that, in
principle, allows you to tell whether or not a published literature
contains evidentiary value. This short session will (1) introduce the
method by using a preregistered case study from the face perception
literature and (2) discuss the advantages and disadvantages of the
p-curve method.
|
Wed, Oct 09, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Carolyn Saund University of Glasgow |
Crowdsourcing: The Good, The Bad, and The Ugly
An introduction to crowdsourcing methods for online surveys and
experiments. First, a brief introduction to crowdsourced data for the
skeptics: history, ethics, stats on data reliability, designing tasks,
what should and should not be done via the magic of the internet. Rife
with warnings, caveats, and cautionary tales. Second, a hands-on,
step-by-step introductory demo of designing and deploying tasks for
online workers. Not a workshop, just a demo, so no need to bring
laptops. Will your fears be assuaged, or validated? You’ll have to come
to find out!
|
Wed, Oct 02, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Jack Taylor and Lisa DeBruine University of Glasgow |
R Shiny Afternoon
Shiny is an R package that makes it easy to create interactive apps in R that can be deployed to the web (find out more here: https://shiny.rstudio.com/). Shiny apps are great for teaching concepts, presenting research results, and automating simple tasks. This four-hour workshop (1pm to 5pm) will cover: |
Tue, Sep 24, 2019 13:00-14:00 |
Level 5, Seminar Room | ||||||
|
Mixed effects models Q&A
If you have questions about mixed-effects modeling, here is an
opportunity to come and ask! Dale and Christoph will be there to field
your questions.
|
Wed, Jun 12, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Maria Gardani & Satu Baylan University of Glasgow |
Systematic reviews and meta analysis |
Wed, May 01, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Martin Lages University of Glasgow |
Variance Constraints for Hierarchical Signal Detection Models
Bayesian models typically place uninformative or weakly informative
priors on parameters. Using a well-known data set on inductive and
deductive reasoning, it is illustrated how incorporating variance
constraints can help to estimate critical parameters and compare signal
detection models with equal and unequal variance.
|
Tue, Apr 23, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Robin Ince University of Glasgow |
A gentle introduction to the Partial Information Decomposition
In many cases in neuroimaging or data analysis we evaluate a statistical dependence in many variables, and find effects in more than one. For example, we might find an effect between a presented stimulus and neural responses in different spatial regions, time periods or frequency bands. A natural question in such situations is to what extent the statistical relationship in the two responses is common, or overlapping, and to what extent there is a unique effect in one response that is not related to the other response. An information theoretic framework called the Partial Information Decomposition (PID) has been proposed to address these questions. The first part of the session will be a gentle introduction to information theoretic quantification of statistical interactions, introducing co-information, redundancy, synergy and the basic theory of the PID, as well as introducing some applications (including interactions between neural responses, interactions between multi-modal stimulus features in speech, interactions between neural responses and behaviour, and predictive model comparison). The second part of the session will go into more detail on the implementation of the PID, including the theory and computation of the Iccs redundancy measure, and further discussion of issues such as misinformation (negative unique information), applications, etc. There will be a break between the two parts to give people the chance to opt out of the more technical second part. |
Wed, Apr 10, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Anna Henschel University of Glasgow |
Adventures with rtweet and tidytext
We have always suspected it: Twitter and #rstats go hand in hand. In
this tutorial we will do some web-scraping with the rtweet package and
look at various fun ways to analyse this data taking advantage of the
tidytext package. Join this tutorial if, for example, you want to learn
how to make a colourful word cloud out of everything you ever tweeted or
if you want to plot what people have been saying about the Great British
Bake Off over time.
|
Wed, Apr 03, 2019 14:00-15:00 |
CCNI Analysis Suite | ||||||
Lisa DeBruine University of Glasgow |
Simple packages and unit tests in R |
Wed, Mar 27, 2019 14:00-15:00 |
CCNI Analysis Suite | ||||||
Carolyn Saund University of Glasgow |
Version Control and Beyond: How Git Will Change Your Life
Commits? Repositories? Branches? Git has a wide vocabulary that isn’t
always intuitive, but that’s no reason to be scared away. From
understanding branches and version control, to collaborative
repositories and undoing history, this is an introduction to concepts
behind git and Github. Open source software is the future of data
science and scientific analyses, so now is the chance to gain a basic
understanding of the magic behind remote and online collaboration. No
experience, only curiosity required!
|
Wed, Mar 20, 2019 14:00-15:00 |
Level 6, Meeting Room | ||||||
Georgi Karadzhov SiteGround |
Jupyter notebooks for efficient research
Coming from a Natural Language Processing engineer’s perspective, where
research is embedded in my job requirements, I have found Jupyter
Notebooks to be a very useful tool for research. Like any other tool,
they have their benefits, but they can also make things go very wrong. I
will start with a (very) brief introduction to Jupyter Notebooks and a
basic workflow. Most importantly, I want to open a discussion about how
research can be done using Jupyter Notebooks. I will discuss how they
are used to facilitate rapid research and what the most common problems
with using them are. The second part of the talk will focus on two of
the biggest issues in research code: code quality and reproducibility. I
will offer advice and processes that can be adopted to improve the
quality of the code and, ultimately, the reproducibility of the
conducted experiments.
|
Mon, Mar 18, 2019 14:00-15:00 |
Level 5, Seminar Room | ||||||
Alba Contreras Cuevas Universidad Complutense de Madrid |
Network analysis as an alternative to conceptualize mental disorders
[** NOTE: meeting takes place at 10am, a departure from the regular time **] In recent years, the network approach has been gaining popularity in the field of psychopathology as an alternative to traditional classification systems like the DSM. The network perspective conceptualises mental disorders as direct interactions between symptoms, rather than as a reflection of an underlying entity. The number of studies using this approach has grown rapidly in mental health. Thus, the aim of this talk is to introduce this new approach in psychopathology, known as network theory. The talk will summarize concepts and methodological issues associated with the study of mental problems from this novel approach. It will also show the current status of network theory and modelling in psychopathology. The final objective is to exchange ideas regarding the theory, methodology, and limitations of Network Analysis. |
Wed, Mar 13, 2019 10:00-11:00 |
Level 5, Seminar Room | ||||||
Robin Ince and Guillaume Rousselet University of Glasgow |
Discussion of Sassenhagen & Draschkow (2018) paper on cluster based
permutation tests
A new paper by Jona Sassenhagen and Dejan Draschkow challenges common interpretations of results from cluster-based permutation tests of MEG/EEG data. Please read the paper and come ready to discuss. The discussion will be led by Robin Ince and Guillaume Rousselet. https://onlinelibrary.wiley.com/doi/10.1111/psyp.13335 |
Wed, Feb 20, 2019 14:00-15:00 |
Level 6, Meeting Room | ||||||
Daniël Lakens Eindhoven University of Technology |
Justify Everything (Or: Why All Norms You Rely on When Doing Research
Are Wrong) (slides)
Science is difficult. To do good science researchers need to know about
philosophy of science, learn how to develop theories, become experts in
experimental design, study measurement theory, understand the statistics
they require to analyze their data, and clearly communicate their
results. My personal goal is to become good enough in all these areas
such that I will be able to complete a single flawless experiment, just
before I retire – but I expect to fail. In the meantime, I often need to
rely on social norms when I make choices as I perform research. From the
way I phrase my research question, to how I determine the sample size
for a study, or my decision for a one- or two-sided test, my
justifications are typically ‘this is how we do it’. If you ask me ‘why’
I often don’t know the answer. In this talk I will explain that,
regrettably, almost all the norms we rely on are wrong. I will provide
some suggestions for attempts to justify aspects of the research cycle
that I am somewhat knowledgeable in, mainly in the area of statistics
and experimental design. I will discuss the (im)possibility of
individually accumulating sufficient knowledge to actually be able to
justify all important decisions in the research you do, and make some
tentative predictions that in half a century most scientific disciplines
will have become massively more collaborative, with a stronger task
division between scholars working on joint projects.
|
Fri, Feb 08, 2019 15:30-16:30 |
Level 5, Seminar Room | ||||||
|
M&Ms Preparation for seminar by Daniel Lakens Equivalence Testing for Psychological Research, Justify Your Alpha, related blog post |
Wed, Feb 06, 2019 14:00-15:00 |
Level 6, Meeting Room | ||||||
Dale Barr University of Glasgow |
Tutorial on Iteration In R
If you’re copying and pasting R code, or getting stuck in loops, please come to this hands-on tutorial on iteration and learn how to use the purrr::map_* functions (like the *apply functions in base R, but better!). Some optional background reading: https://r4ds.had.co.nz/iteration.html There are workstations in the lab you can use, or you can bring your own laptop. In the latter case, please make sure to have tidyverse installed. |
Wed, Jan 16, 2019 14:00-15:00 |
Boyd Orr Building, Room 520 | ||||||
Dale Barr University of Glasgow |
Randomizing and automating assessment with the R exams package (R exams, slides)
I will give an overview of the open source “exams” package for R, which
assists in the generation and assessment of electronic and written
exams. I recently used this package for the first time to create a
written exam for my L3 statistics course. We validated the performance
of the automatic scanning of answers and found above 99% accuracy.
Although it requires substantial time to set up, over the long run, the
exams package makes creating and marking exams more efficient. More
information about the package can be found at http://www.r-exams.org.
|
Wed, Dec 12, 2018 13:00-14:00 |
Level 6, Meeting Room | ||||||
Jack Taylor University of Glasgow |
LexOPS: A Web App for Stimulus Generation in Psycholinguistics (LexOPS github)
A common problem in designing Psychology experiments that use word
stimuli is that they often require labour-intensive methods of stimulus
generation, to ensure that specific prelexical, lexical, and semantic
features are suitably controlled for. When this is done poorly,
confounding variables can obscure the interpretation of results. This
talk and demonstration introduce a new intuitive R Shiny App which
integrates a wide range of features from multiple sets of norms,
corpora, megastudies, and measures. The App allows the user to easily
generate stimuli, fetch characteristics of existing stimuli, and
identify suitably matched stimulus controls within desired constraints.
|
Wed, Dec 05, 2018 14:00-15:00 |
Level 5, Seminar Room | ||||||
Lawrence Barsalou University of Glasgow |
The Situated Assessment Method (SAM^2): A new approach to measuring,
understanding, and predicting health behaviours
Based on theories of situated cognition and embodiment in cognitive
science, the Situated Assessment Method (SAM^2) offers a theoretically
motivated approach for measuring health behaviours at both the group and
individual levels. Rather than attempting to capture a health behaviour
with general items that abstract over relevant situations (as in
standard self-report instruments), SAM^2 assesses two situated
dimensions. First, SAM^2 establishes the specific situations associated
with an individual’s behaviour in a health domain. Second, SAM^2 samples
features from the situated action cycle that attempt to predict the
behaviour across situations. As a result, SAM^2 establishes overall
measures of a health behaviour grounded directly in situational
experience, along with features of the situated action cycle that
predict the behaviour. In recent studies, we have found that SAM^2 does
an excellent job of predicting health behaviours associated with habits,
eating, stress, and trichotillomania (compulsive hair pulling). Using
mixed effects models, SAM^2 typically explains 60 to 80% of the variance
at the group level and more variance at the individual level,
demonstrating large systematic individual differences. In addition,
SAM^2 represents individual differences in an explicit manner that has
potential for supporting individuals as they understand and work with a
health behaviour. Issues associated with causality, explicit
vs. implicit measures, and external validity are important to address.
|
Wed, Nov 28, 2018 14:00-15:00 |
Level 6, Meeting Room | ||||||
Robin Ince University of Glasgow |
The Problem of Multiple Comparisons in Neuroimaging |
Wed, Nov 21, 2018 14:00-15:00 |
Level 6, Meeting Room | ||||||
Martin Lages University of Glasgow |
Spot the celebrity lookalike!
We applied logistic mixed models to data from two identity
discrimination experiments. We investigated the effect of familiarity on
discrimination performance exploring subject-specific and item-specific
random effects with the lme4 and brms packages in R.
|
Wed, Nov 14, 2018 14:00-15:00 |
Level 5, Seminar Room | ||||||
Jessica Flake McGill University |
The Fundamental Role of Construct Validity in Original and Replicated Research |
Wed, Nov 07, 2018 14:00-15:00 |
Level 5, Seminar Room | ||||||
Lisa DeBruine University of Glasgow |
Setting up your first Shiny app (tutorial)
*** NOTE TIME CHANGE: NOW TAKING PLACE AT 15:00! *** Shiny is an R package that makes it easy to build interactive web apps straight from R. In this tutorial, I’m going to walk you through the absolute basics of setting up a shiny app, starting with the example built into RStudio. I’m not going to explain yet how shiny apps are structured; the goal is just to get something up and running and give you some familiarity with the layout of a fairly simple app. The tutorial will take place in a computer lab, so there is no need to bring a laptop, but if you do, please make sure you have installed R and RStudio before you arrive. You don’t need to have any other familiarity with R. |
Wed, Oct 31, 2018 15:00-16:00 |
TBA | ||||||
Ben Jones University of Glasgow |
Writing Registered Reports (slides)
Publication bias distorts the scientific record and can give a false
impression of the robustness of published results. Publication bias is
neutralised in Registered Reports, a new publication format in which the
rationale, methods, and analysis plan are peer reviewed prior to data
collection or analyses, and an in-principle acceptance can be given. This
short talk will introduce and highlight useful online resources for
writing Registered Reports. I will also discuss some of the
misconceptions about the format and discuss the key lessons I have
learnt from preparing my first Registered Reports.
|
Wed, Oct 24, 2018 14:00-15:00 |
Level 5, Seminar Room | ||||||
Dale Barr University of Glasgow |
Mixed-effects modeling of temporal autocorrelation: Keep it minimal
Linear models assume the independence of residuals. This assumption is
violated in datasets where there is temporal autocorrelation among the
residuals. Recently, Baayen, Vasishth, Kliegl, and Bates (2017)
presented evidence for autocorrelation in three psycholinguistic
datasets, and showed how Generalized Additive Mixed Models (GAMMs) can
be used to model these autocorrelation patterns. However, there is
currently little understanding of the impact of autocorrelation on model
performance, and the extent to which GAMMs improve (or impair)
inference. Through Monte Carlo simulation, we found that mixed-effects
models perform well in the face of autocorrelation, except when
autocorrelation is confounded with treatment variance. GAMMs did little
to improve power, and in fact, the use of factor smooths dramatically
impaired power for detecting effects of any between-subjects factors.
These results suggest GAMMs are only needed in special cases.
|
Wed, Oct 10, 2018 14:00-15:00 |
Level 5, Seminar Room | ||||||
Christoph Schild Institute for Psychology, University of Copenhagen |
Introduction to the formr survey framework in R (formr, slides)
Formr is a free and open source survey framework which supports a broad range of study designs. Formr surveys have a responsive layout (i.e. they also work on phones and tablets) and the data can easily be shared via spreadsheets. Further, studies can be linked to the OSF. Because of the integration with R, formr allows researchers to use a familiar programming language to enable complex features. For more details and information please see https://formr.org |
Wed, Oct 03, 2018 15:00-16:00 |
Level 5, Seminar Room |