Methods & MetaScience is a journal club and discussion group organized by the School of Psychology & Neuroscience at the University of Glasgow. Meetings are every Wednesday from 12:00 to 13:00 UK time on Zoom. Coding Club meets every Wednesday from 16:05 to 17:00.

In addition to talks, we also run training events. If you would like to present or run a training workshop, please let Dale know!

Some of our events are open to the public; when they are, the Zoom link will be posted here. If you are University of Glasgow staff or a student, contact Dale for access to the Teams channel with the recordings and Zoom links for non-public events.

For a dynamically updated list of talks that you can subscribe to, please see the events list for M&Ms or Coding Club.

Series

See GLMM Resources for resources related to our series on generalised linear mixed effects models.

See Data Visualisation Resources for resources related to visualising data using graphs and tables.


Upcoming Events

Elaine Jackson and Martin Lages
A Meta-Analysis on Juror Decisions
We are interested in whether the English verdict system (guilty/not guilty) and the Scottish verdict system (guilty/not proven/not guilty) affect conviction rates, as reported in ten studies with different crime types (e.g., murder, assault, rape) and a total of 1,986 jurors. The results of a logistic regression with mixed effects suggest a significant decrease in convictions for the Scottish compared to the English verdict system. We discuss some implications of this result.
Wed, Nov 02, 2022
12:00-13:00
Level 5, Seminar Room
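
For readers who want to try something similar, the analysis described above boils down to a logistic mixed-effects model. A minimal sketch in R with lme4; the data frame and column names (jurors, convicted, verdict_system, study) are hypothetical:

    # Logistic mixed-effects model of conviction decisions.
    # jurors, convicted, verdict_system, and study are hypothetical names.
    library(lme4)

    # convicted: 0/1 juror decision; verdict_system: English vs. Scottish;
    # study: which of the source studies each juror belongs to.
    mod <- glmer(convicted ~ verdict_system + (1 | study),
                 data = jurors, family = binomial)
    summary(mod)  # a negative Scottish-system coefficient indicates
                  # lower log-odds of conviction than the English system
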
Alistair Beith
University of Glasgow
Building Online Auditory Experiments with Accurate Reaction Times

How much trust are you willing to put in the audio equipment that a participant uses to take part in your online study? Many experiment-building tools create studies that perform well in the lab but offer no way of knowing how they perform in the wild. The solution to this problem is not just to improve performance, but also to measure it.

This tutorial will present a selection of tools for building your own auditory experiments with jsPsych and measuring the timing accuracy of responses. With these tools you can present auditory stimuli with sample-level precision and collect responses with well-defined errors. No knowledge of jsPsych is required, and active participation is optional.
Wed, Nov 09, 2022
12:00-13:00
Level 5, Seminar Room
James Bartlett
University of Glasgow
APA-style manuscripts with RMarkdown and papaja
Wed, Nov 16, 2022
12:00-13:00
Level 5, Seminar Room
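
papaja provides an R Markdown template for APA-style manuscripts. A minimal sketch of the workflow (the file name is a placeholder):

    # Create and render an APA-style manuscript with papaja.
    # install.packages("papaja")  # or remotes::install_github("crsh/papaja")
    library(rmarkdown)

    # Start from the APA template bundled with papaja, then render it:
    draft("manuscript.Rmd", template = "apa6", package = "papaja", edit = FALSE)
    render("manuscript.Rmd")

    # Inside the .Rmd, statistics can be reported in APA format with, e.g.:
    # papaja::apa_print(t.test(extra ~ group, data = sleep))$full_result
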
Christoph Daube
something something neuroimaging related (paper discussion)
Wed, Nov 23, 2022
12:00-13:00
Level 5, Seminar Room
Juliane Kloidt
Structural equation modelling
Wed, Nov 30, 2022
12:00-13:00
Level 5, Seminar Room
Christoph Scheepers & Dale Barr
Nonconvergence and other delights of linear mixed-effects modelling
Wed, Dec 07, 2022
12:00-13:00
Level 5, Seminar Room
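
Ahead of the talk, a sketch of the usual responses to lme4 nonconvergence warnings (the model and data frame here are hypothetical; the right remedy always depends on the model and data):

    library(lme4)

    m <- lmer(rt ~ cond + (1 + cond | subj) + (1 + cond | item), data = dat)

    # 1. Refit with every available optimiser and compare the results:
    summary(allFit(m))

    # 2. Switch optimiser and raise the iteration limit:
    m2 <- update(m, control = lmerControl(optimizer = "bobyqa",
                                          optCtrl = list(maxfun = 2e5)))

    # 3. Simplify the random effects, e.g. drop random-effect correlations:
    m3 <- lmer(rt ~ cond + (1 + cond || subj) + (1 + cond || item), data = dat)
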


Past Events

Lisa DeBruine
University of Glasgow
Intro to Code Review

Research transparency and integrity benefit greatly from computationally reproducible code, and there is an increasing emphasis on learning the skills to code. However, there hasn't been as much emphasis on learning the skills to check code. People cite a lack of time, expertise, and incentives as reasons that they don't ask others to review their research code, but the most commonly cited reason is embarrassment at others seeing their code. In this introductory session, I will cover the goals of code review and some key concepts, and present checklists for preparing your code to be reviewed and for reviewing others' code. We will also do a mock review of my own code. The hope is that code check sessions can become a regular part of the M&Ms seminar series.

NOTE: This is an IN PERSON event (preferred) with a hybrid option. If you wish to attend remotely, please contact the organiser (Dale Barr) for the link.
Wed, Oct 19, 2022
12:00-13:00
Level 6, Meeting Room
Robin Ince
University of Glasgow
Version control for data [ONLINE ONLY EVENT]
Git is now widely used for tracking and managing changes to source code and text, but systematically tracking changes to data is less common. There are now a range of tools available to track data from acquisition, through analysis to results. This informal session will give a quick overview of tools I looked at to address this problem, including Git LFS, git-annex, DVC and Datalad. I briefly cover the features of each of these and why I chose DVC. I will then do a short demo of using DVC to track changes to data, and keep data in sync between collaborators or different computers. You can install DVC if you want to follow along (not required): https://dvc.org/doc/install Time permitting, we will talk a bit about DVC functionality around defining reproducible analysis pipelines.
Wed, Oct 12, 2022
12:00-13:00
Zoom
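
For readers who want a feel for the workflow, a minimal DVC session looks roughly like this (shell commands; the file and remote names are placeholders):

    # inside an existing git repository
    dvc init
    dvc add data/raw.csv            # start tracking the data file with DVC
    git add data/raw.csv.dvc data/.gitignore
    git commit -m "Track raw data with DVC"

    # share data between collaborators or machines via a remote
    dvc remote add -d storage /path/to/shared/storage
    dvc push                        # upload tracked data to the remote
    dvc pull                        # fetch it on another machine
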
Dale Barr
University of Glasgow
Brainstorming Meeting

This will be a planning meeting for the 2022-3 Methods & Metascience (M&Ms) “brown bag” journal club. Do you have ideas for M&Ms training events, workshops, speakers, or papers to discuss? Are there particular topics you would like to see covered? Please come to our brainstorming meeting with your ideas and wishes for the coming year. This meeting will be IN PERSON in the Level 6 Meeting Room.

If you cannot attend, please send any thoughts you have to Dale. Feel free to bring your lunch!
Wed, Oct 05, 2022
12:00-13:00
Level 6, Meeting Room
Chris Hartgerink
Liberate Science
ResearchEquals.com - step by step publishing

Open science brings many changes, yet publishing remains the same. As a result, many improvements in the research and education process can't fulfill their promises. To facilitate a rapidly changing research ecosystem, ResearchEquals allows researchers to publish whatever outputs their work creates, instead of working to create outputs that can be published. Building on open infrastructures, ResearchEquals allows you to publish over 20 different types of research modules, with more being added based on your needs. Example modules include theory, study materials, data, or software. However, other outputs, like presentations, figures, or educational resources can also be published. All of these research steps are linked together to create a research journey - recognizing that educational materials, research design and analysis are all part of our learning journeys. In this session you will get an introduction to ResearchEquals and how to join to co-create the platform.

Chris Hartgerink (he/him/they) is the founder of Liberate Science, an organization repairing knowledge distribution. He was awarded his PhD in meta-research (Tilburg University; 2020) and now applies that knowledge to create practical systems for researchers to make better research easier.
Wed, Apr 20, 2022
16:00-17:00
Zoom
Phil McAleer
University of Glasgow
Tables tutorial

This week we focus on tables for the M&Ms DataViz series. Phil will lead us through the tutorial Riding tables with {gt} and {gtExtras} by Benjamin Nowak.

The {gt} package already made it easy to create tables from a raw dataset, and the {gtExtras} package now adds many customisation options. We will illustrate the possibilities of these packages with a TidyTuesday dataset on Tour de France riders, extracted from Alastair Rushworth's {tdf} package.
Wed, Apr 06, 2022
16:00-17:00
Zoom
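
As a taster for the tutorial, a small {gt}/{gtExtras} sketch using a built-in dataset rather than the Tour de France data:

    library(gt)
    library(gtExtras)
    library(dplyr)

    mtcars |>
      head(5) |>
      select(mpg, cyl, hp, wt) |>
      gt() |>
      tab_header(title = "Example {gt} table") |>
      gt_theme_538()  # one of the ready-made {gtExtras} themes
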
Kevin Guyan
Queer Data

[Affiliated talk from LGBTQIA+ Reading Group]

Data has never mattered more. Our lives are increasingly shaped by it and how it is defined, collected and used. But who counts in the collection, analysis and application of data? Kevin Guyan will discuss key themes from his new book Queer Data: Using Gender, Sex and Sexuality Data for Action including the relationship between data and visibility, the politics of who and how to count, and how data biases are used to delegitimise the everyday experiences of queer communities.

This is a hybrid event - register for online or in-person attendance via Eventbrite.

https://www.eventbrite.co.uk/e/reading-group-kevin-guyan-queer-data-tickets-256117744067
Wed, Mar 30, 2022
11:00-12:00
Zoom
Balazs Aczel & Marton Kovacs
ELTE University, Budapest
Developing tools and practices to promote open and efficient science

In this talk, I'll introduce some new tools that aim to improve the efficiency of researchers' work and the accumulation of knowledge. I'll argue that minimizing extra workload and maximizing ease of use are of key importance when introducing new research practices. The tools that I'll share are: (1) the Transparency Checklist, a consensus-based general Shiny app checklist to improve and document the transparency of research reports; (2) Tenzing, a solution to simplify CRediT-based documentation and reporting of contributions to scholarly articles; (3) SampleSizePlanner, an app and R package that offer nine different procedures to determine and justify the sample size of a study design; and (4) the Multi-analyst guidance, a consensus-based guide for conducting and documenting multi-analyst studies.

  1. Transparency Checklist
  2. Tenzing
  3. SampleSizePlanner
  4. Consensus-based guidance for conducting and reporting multi-analyst studies
Wed, Mar 16, 2022
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
Discussion of “Fooled by beautiful data”
Continuing the M&Ms DataViz series with a discussion of “Fooled by beautiful data: Visualization aesthetics bias trust in science, news, and social media” (Lin & Thornton, 2021).
Wed, Mar 09, 2022
16:00-17:00
Zoom
James Bartlett
University of Glasgow
The Science of Visual Data Communication: What Works
Starting off the M&Ms DataViz series with a discussion of “The Science of Visual Data Communication: What Works” (Franconeri et al., 2021).
Wed, Feb 23, 2022
16:00-17:00
Zoom
Chris Hartgerink
Liberate Science
ResearchEquals.com - step by step publishing

Open science brings many changes, yet publishing remains the same. As a result, many improvements in the research and education process can't fulfill their promises. To facilitate a rapidly changing research ecosystem, ResearchEquals allows researchers to publish whatever outputs their work creates, instead of working to create outputs that can be published. Building on open infrastructures, ResearchEquals allows you to publish over 20 different types of research modules, with more being added based on your needs. Example modules include theory, study materials, data, or software. However, other outputs, like presentations, figures, or educational resources can also be published. All of these research steps are linked together to create a research journey - recognizing that educational materials, research design and analysis are all part of our learning journeys. In this session you will get an introduction to ResearchEquals and how to join to co-create the platform.

Chris Hartgerink (he/him/they) is the founder of Liberate Science, an organization repairing knowledge distribution. He was awarded his PhD in meta-research (Tilburg University; 2020) and now applies that knowledge to create practical systems for researchers to make better research easier.
Wed, Feb 16, 2022
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
PsyTeachR Glossary Hackathon

Learn about the open-source and crowdsourced PsyTeachR glossary and how to contribute. We’ll lead a hands-on session where you can help identify terms that need to be added or definitions that need more detail. You’ll learn how to file issues on github (but you can also participate without a github account).

Zoom: [Zoom link hidden. Please log in to view it, or contact the event organiser]
Glossary: https://psyteachr.github.io/glossary/
Resources: https://docs.google.com/document/d/1FOohcEbWTAXB7OWSvMZzwMI3F--pNNRT3_3DSB6KeB8/edit#
Wed, Feb 09, 2022
16:00-17:00
Zoom
Robin Ince
University of Glasgow
Within-participant statistics, prevalence and reproducibility

Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. Bayesian prevalence delivers a quantitative population estimate with associated uncertainty instead of reducing an experiment to a binary inference. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology, and neuroimaging. Its emphasis on detecting effects within individual participants can also help address replicability issues in these fields.

https://elifesciences.org/articles/62461
Wed, Feb 02, 2022
16:00-17:00
Zoom
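
The core idea of the talk can be sketched in a few lines of R: given k of n participants significant at level alpha, put a uniform prior on the population prevalence gamma and compute its posterior on a grid. This is a simplified sketch of the method, not the authors' released code, and it conservatively assumes perfect within-participant sensitivity:

    # k of n participants show a significant within-participant test at alpha.
    k <- 14; n <- 20; alpha <- 0.05

    gamma <- seq(0, 1, length.out = 1001)  # candidate prevalence values
    p_sig <- alpha + gamma * (1 - alpha)   # P(significant | prevalence gamma)

    posterior <- dbinom(k, n, p_sig)       # uniform prior over gamma
    posterior <- posterior / sum(posterior)

    gamma[which.max(posterior)]            # MAP estimate, approximately
                                           # (k/n - alpha) / (1 - alpha)
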
Dan Quintana
University of Oslo, Biological Psychiatry
Increasing the credibility of meta-analysis

A very M&Ms-relevant talk hosted by the Workshop in Quantitative Methods:

Several scientific fields are experiencing a reproducibility crisis, in which hypothesis-driven studies are failing to replicate. Poor reproducibility has been linked to several factors, but some of the most pertinent issues are analytical flexibility and lack of transparency. While these issues are becoming better known for primary research, they have received less attention for meta-analysis. In this talk, Daniel will present remedies for analytical flexibility, such as pre-registration, and ways to increase the transparency of analyses. Together, reducing analytical flexibility and increasing transparency will improve the credibility of meta-analysis.
Wed, Jan 26, 2022
13:00-14:00
Zoom
Clara Weale
Introducing the opportunities and adventures at the Glasgow Library of Olfactory Material
Wed, Jan 19, 2022
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club: Make it Pretty

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this last session, we’ll finish the app and make it visually appealing and all your own style. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials.
Wed, Dec 15, 2021
14:00-15:00
Zoom
Thomas Varley
Indiana University
Intersectional synergies: untangling irreducible effects of intersecting identities via information decomposition

The idea of intersectionality has become a frequent topic of discussion both in academic sociology and among popular movements for social justice such as Black Lives Matter, intersectional feminism, and LGBT rights. Intersectionality proposes that an individual's experience of society has aspects that are irreducible to the sum of one's various identities considered individually, but are "greater than the sum of their parts." In this work, we show that the effects of intersectional identities can be statistically observed in empirical data using information theory. We show that, when considering the predictive relationship of various identity categories such as race, sex, and income (as a proxy for class) to outcomes such as health and wellness, robust statistical synergies appear. These synergies show that there are joint-effects of identities on outcomes that are irreducible to any identity considered individually and only appear when specific categories are considered together (for example, there is a large, synergistic effect of race and sex considered jointly on income irreducible to either race or sex). We then show using synthetic data that the current gold-standard method of assessing intersectionalities in data (linear regression with multiplicative interaction coefficients) fails to disambiguate between truly synergistic, greater-than-the-sum-of-their-parts interactions, and redundant interactions. We explore the significance of these two distinct types of interactions in the context of making inferences about intersectional relationships in data and the importance of being able to reliably differentiate the two. Finally, we conclude that information theory, as a model-free framework sensitive to nonlinearities and synergies in data, is a natural method by which to explore the space of higher-order social dynamics.

https://arxiv.org/abs/2106.10338
Wed, Dec 08, 2021
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this seventh session, we will: make sure the user interface has a clear design, add help messages, and clean user inputs. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials.
Wed, Dec 08, 2021
14:00-15:00
Zoom
M&Ms
Cancelled due to strike
Wed, Dec 01, 2021
16:00-17:00
Coding Club
Cancelled due to strike
Wed, Dec 01, 2021
14:00-15:00
Paper Discussion
University of Glasgow
Pushback against GLMMs

Paper discussion:

Gomila, R. (2021). Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis. Journal of Experimental Psychology: General, 150(4), 700–709. https://doi.org/10.1037/xge0000920

Knief, U., & Forstmeier, W. (2021). Violating the normality assumption may be the lesser of two evils. Behavior Research Methods. https://doi.org/10.3758/s13428-021-01587-5

Ryan, W. H., Evers, E. R. K., & Moore, D. A. (2021). Poisson regressions: A little fishy. Collabra: Psychology, 7(1), 27242. https://doi.org/10.1525/collabra.27242
Wed, Nov 24, 2021
16:00-17:00
Zoom
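
The disagreement in these papers is easy to reproduce on toy data: a linear probability model estimates the risk difference directly, while logistic regression works on the log-odds scale. A simulated sketch:

    set.seed(1)
    cond <- rep(0:1, each = 100)
    y <- rbinom(200, 1, prob = 0.3 + 0.2 * cond)  # true risk difference: 0.20

    lpm <- lm(y ~ cond)                        # linear probability model
    coef(lpm)["cond"]                          # direct risk-difference estimate

    logit <- glm(y ~ cond, family = binomial)  # logistic regression
    coef(logit)["cond"]                        # same effect, log-odds scale
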
Lisa DeBruine
University of Glasgow
Coding Club: Input Interface

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this fifth session, we will create an input tab so people can add their ratings of the books. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials.
Wed, Nov 24, 2021
14:00-15:00
Zoom
Christoph Scheepers
University of Glasgow
Effect size and power in (Generalized) Linear Mixed Models of response time data

In their oft-cited tutorial on power and effect size in mixed-effects models, Brysbaert and Stevens (2018; henceforth B&S18) make some important observations about the power of typical RT experiments with counterbalanced repeated-measures designs. They show that the traditional (and by now largely outdated) approach of performing repeated-measures ANOVAs on subject- and item-aggregated data tends to grossly overestimate effect sizes in experiments of this kind. Indeed, when the same data are analysed with more state-of-the-art Linear Mixed Models (LMMs), which require trial-level observations, typical RT effects come out very small by common effect-size standards. Based on their simulations, B&S18 recommend that RT experiments with repeated measures at subject and item level should have at least 1,600 observations in each condition (e.g., 40 participants × 40 items per design cell) to ensure sufficient power for small effects.

However, B&S18 stop short of assessing effect size and power for more advanced alternatives to LMM which, in the context of analysing RT data, not only make more theoretical sense, but also bear the prospect of being more sensitive to cross-condition differences in RT. Indeed, Lo and Andrews (2015) raise a number of points against fitting LMMs to RT data, regardless of whether or not such data are transformed to make them more compliant with the normal distribution requirement in LMM. Instead, they recommend that RT data be modelled via Generalized Linear Mixed Models (GLMMs) of either the Inverse Gaussian or the Gamma model family to more appropriately account for the positive skew in RT distributions. Moreover, they recommend using the Identity link in such models to maintain the assumption of linear relations between predictor and outcome variables.

In this talk, I will present re-analyses of an ‘overpowered’ dataset (1,020 subjects × 420 items) that was also used in B&S18. I will show that LMMs on raw or transformed RTs produce much smaller estimates of population effect sizes (ds ~ 0.1 for a 16-millisecond priming effect) than Inverse Gaussian or Gamma GLMMs (ds ~ 0.4). This is mainly due to sub-optimal fits achieved by the former: Residuals account for more than 55% of the random variance in LMMs, but for practically 0% in GLMMs (in the latter, virtually all of the random variability in the data is explained by subject- and item-related random effects).

Implications for power- and meta-analysis will be discussed.
Wed, Nov 17, 2021
16:00-17:00
Zoom
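
A sketch of the model comparison described above, following the Lo and Andrews (2015) recommendation of identity-link GLMMs; the data frame and columns (dat, rt, cond, subj, item) are hypothetical, and fitting these GLMMs can require care in practice:

    library(lme4)

    # Standard LMM on raw RTs:
    m_lmm <- lmer(rt ~ cond + (1 | subj) + (1 | item), data = dat)

    # Gamma and inverse-Gaussian GLMMs with an identity link model the
    # positive skew of RTs while keeping predictor-outcome relations linear:
    m_gam <- glmer(rt ~ cond + (1 | subj) + (1 | item),
                   family = Gamma(link = "identity"), data = dat)
    m_inv <- glmer(rt ~ cond + (1 | subj) + (1 | item),
                   family = inverse.gaussian(link = "identity"), data = dat)
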
Jack Taylor
University of Glasgow
Rating norms should be calculated from cumulative link mixed effect models
Studies which provide norms of Likert ratings typically report per-item summary statistics. Traditionally, these summary statistics comprise the mean and the standard deviation (SD) of the ratings, and the number of observations. Such summary statistics can preserve the rank order of items, but provide distorted estimates of the relative distances between items because of the ordinal nature of Likert ratings. Inter-item relations in such ordinal scales can be more appropriately modelled by cumulative-link mixed effects models (CLMMs). In a series of simulations, and with a reanalysis of an existing rating norms dataset, we show that CLMMs can be used to more accurately norm items, and can provide summary statistics analogous to the traditionally reported means and SDs, but which are disentangled from participants’ response biases. CLMMs can be applied to solve important statistical issues that exist for more traditional analyses of rating norms.
Wed, Nov 10, 2021
16:00-17:00
Zoom
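
A minimal sketch of the approach with the {ordinal} package; the data frame and column names are hypothetical:

    library(ordinal)

    dat$rating <- factor(dat$rating, ordered = TRUE)  # e.g. a 1-7 Likert scale
    m <- clmm(rating ~ 1 + (1 | subject) + (1 | item), data = dat)
    summary(m)
    # The item random effects place items on the latent scale,
    # disentangled from participants' response biases.
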
Lisa DeBruine
University of Glasgow
Coding Club: Visualising Data

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this fourth session, we’ll create an interactive plot (maybe a map?) that visualises the form data. See https://psyteachr.github.io/mms/coding_club.html for the schedule and materials.
Wed, Nov 10, 2021
14:00-15:00
Zoom
Dale Barr
University of Glasgow
A Gentle Introduction to Generalized Linear Mixed Models
Part of the GLMM series
Wed, Nov 03, 2021
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club: Filtering Data

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this third session, we will filter the book table by genre, and create a dynamic filtering input for location. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials.
Wed, Nov 03, 2021
14:00-15:00
Zoom
Amanda Ptolomey
University of Glasgow
Encounters with zine making in participatory research

Zines are small DIY booklets or magazines, and making them requires no creative or specialist skills - just about anyone can make a zine. This seminar will explore zine-making as a creative research method. Drawing on my work facilitating zine-making with children, disabled young women, and adults, I reflect on the power of zines in the research encounter as media for agency and resistance, and as canvasses for self-expression through and beyond words.

Bio: Based in the Strathclyde Centre for Disability Research at the University of Glasgow, Mindy (Amanda) Ptolomey specialises in developing creative feminist methods to work with routinely excluded people. Alongside writing up her doctoral research developing zine-making as a creative feminist method with disabled young women, Mindy is currently contributing her expertise on zine-making to two additional research projects - Back Chat: developing arts based methods of knowledge generation and exchange with children in times of crisis (with Helen Lomax and Kate Smith, University of Huddersfield) and Researchers Don't Cry?! Locating, Articulating, Navigating and Doing Emotion in the Field (with Lisa Bradley and Nughmana Mirza, University of Glasgow). Mindy is also on the steering group for the UKRI Future Leaders funded study Following Young Fathers Further (PI Anna Tarrant, University of Lincoln), contributing her knowledge of zine-making and creative methods with children and young people. Mindy also facilitates young women focussed events including conferences, workshops, and film screenings as part of the feminist social science collective Girlhood Gang, which she co-founded. Alongside her academic research, Mindy has worked in community development and peacebuilding education for over a decade, designing and leading creative projects in Scotland and internationally.
Wed, Oct 20, 2021
16:00-17:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club: Reading Data

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this second session, we’ll learn to read data from a Google Sheet. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials.
Wed, Oct 20, 2021
14:00-15:00
Zoom
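
For anyone who wants to try this at home, reading a public Google Sheet takes only a few lines with {googlesheets4} (the sheet URL below is a placeholder):

    library(googlesheets4)

    gs4_deauth()  # no login needed for a publicly shared sheet
    books <- read_sheet("https://docs.google.com/spreadsheets/d/<sheet-id>")
    head(books)
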
Heather Cleland Woods, Phil McAleer, Lisa DeBruine, Kate Reid
University of Glasgow
An ounce of prevention is worth a pound of cure (slides)
This seminar will highlight a few things you can do during the planning stage of your project to help you develop healthy research habits, such as thinking about formulating suitable research questions (qual and quant), including positive and negative controls to test whether the collected data make sense, justifying sample size and other design decisions, or simulating data to create a concrete data cleaning and analysis plan. The talk is aimed at undergraduate and MSc students just starting their dissertations, but is applicable to anyone starting a qualitative or quantitative research project.
Wed, Oct 13, 2021
15:00-16:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club: Setup

We’ll be working through the book Building Web Apps with R Shiny to practice and extend the R skills you’re learning in methods and stats classes to make a custom web app for searching, visualising and updating data from a Google form.

In this first session, we’ll set up a demo shiny app and decide what kind of data our forms will collect. See https://psyteachr.github.io/shiny-tutorials/coding_club.html for the schedule and materials.
Wed, Oct 06, 2021
14:00-15:00
Zoom
Dale Barr
University of Glasgow
Methods & Metascience Organizational Meeting for 2021-2022
Organisational meeting. All staff/students are welcome to come and help plan activities of the methods centre for the coming year. Come and discuss training needs, possible speakers and events, outreach activities, and long-term plans.
Wed, Sep 29, 2021
14:00-15:00
Zoom
Alan Huebner
University of Notre Dame
Computational Tools and Applications for Generalizability Theory

Generalizability theory (G-theory) is a powerful, modern framework in which to conduct reliability analyses. The method allows the user to disentangle various sources of measurement error and find optimal measurement procedures. However, there has been a scarcity of user-friendly computer software resources for conducting basic and advanced G-theory analyses. This talk will give a brief overview of G-theory and current computational tools, discuss some applications to sports performance science, and demonstrate the use of the Gboot package in R for computing bootstrap confidence intervals for G-theory variance components and reliability coefficients:

Code on github: https://github.com/alanhuebner10/Gboot.git
Wed, Jun 16, 2021
14:00-15:00
Zoom
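
Gboot's own interface is demonstrated in the talk; as general background, the variance components at the heart of a simple person-by-rater G-study can also be estimated with a crossed random-effects model (a generic sketch, not Gboot's API; the data frame is hypothetical):

    library(lme4)

    m <- lmer(score ~ 1 + (1 | person) + (1 | rater), data = ratings)
    vc <- as.data.frame(VarCorr(m))
    v <- setNames(vc$vcov, vc$grp)  # person, rater, and residual variances

    # Generalizability coefficient for a mean over n_r raters:
    n_r <- 2
    v["person"] / (v["person"] + v["Residual"] / n_r)
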
Lisa DeBruine
University of Glasgow
Coding Club
Code along and create a shiny app that simulates and plots data with user-input parameters. In this third session, we will restructure the simulation functions to keep a record of previously simulated data to make faceted plots for comparison. Materials to get caught up are at https://psyteachr.github.io/shiny-tutorials/coding_club.html
Wed, Jun 09, 2021
14:00-15:00
Zoom
Maureen Haaker
UK Data Service / University of Suffolk
(Re)Using Qualitative Data: Getting the Most Out of Data (slides)
Wed, Jun 02, 2021
14:00-15:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club
Code along and create a shiny app that simulates and plots data with user-input parameters. In this second session, we will learn how to check the user input for errors, such as impossible values, and handle these without crashing. Materials to get caught up are at https://psyteachr.github.io/shiny-tutorials/coding_club.html
Wed, May 26, 2021
14:00-15:00
Zoom
Guillaume Rousellet
University of Glasgow
Benefits of learning and teaching bootstrap methods (slides)

In this session I'll try to achieve two goals: first, to give a brief introduction to bootstrap methods, focusing on the percentile bootstrap; second, to illustrate how teaching the bootstrap is a great opportunity to introduce or consolidate key statistical concepts. Notably, building a bootstrap distribution requires discussions about sampling and the data acquisition process. A bootstrap distribution can then be used to compute important quantities: standard error, bias and confidence intervals, which can be discussed using graphical representations of bootstrap distributions. Teaching bootstrap methods also leads to the introduction of robust estimation, because contrary to a popular misconception, bootstrap methods are not robust on their own. Teaching bootstrap methods is an effective way to introduce Monte-Carlo simulations, requiring very little code alteration. Finally, consideration of bootstrap distributions is a great stepping stone to Bayesian posterior distributions, by considering a full distribution of plausible population values as opposed to a few summary statistics.

https://journals.sagepub.com/doi/full/10.1177/2515245920911881
Wed, May 19, 2021
14:00-15:00
Zoom
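
A minimal percentile bootstrap in R, of the kind the talk introduces, here paired with a robust estimator (a 20% trimmed mean) since bootstrap methods are not robust on their own:

    set.seed(42)
    x <- rexp(30)  # a small, skewed sample
    nboot <- 2000

    boot_est <- replicate(nboot,
                          mean(sample(x, replace = TRUE), trim = 0.2))
    quantile(boot_est, c(0.025, 0.975))  # 95% percentile bootstrap CI
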
Lisa DeBruine
University of Glasgow
Coding Club
Wed, May 12, 2021
14:00-15:00
Zoom
Crystal Steltenpohl, James Montilla Doble, Dana Basnight-Brown
Society for the Improvement of Psychological Science
Global Engagement Task Force

The Society for the Improvement of Psychological Science (SIPS) Global Engagement Task Force Report outlines several suggestions, specifically around building partnerships with geographically diverse open science organizations; increasing SIPS presence at other, more local events; diversifying remote events; considering geographically diverse annual conference locations; improving membership and financial resources; and surveying open science practitioners from geographically diverse regions.

Three members of this task force will speak about the report and the broader issues relevant to open science and scientific organisations.

Wed, May 05, 2021
14:00-15:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club Shiny Quiz 4
Wed, Apr 28, 2021
14:00-15:00
Zoom
Brianna Dym
University of Colorado, Boulder
Learning CS through Transformative Works: A perspective on underrepresented groups learning to code

Computer science classrooms are notorious for driving out underrepresented groups like women, people of color, and LGBTQ+ people. Though neither intentional nor malicious, there are many social elements that contribute to these chronic under-representations in computer science classrooms across all academic levels. Investigating alternative paths to broaden participation in computing, we interviewed folks from underrepresented groups in computing and found that many people thought computer science classes were too boring, too hard, and had specific stereotypes about what "counts" as computer science in the first place. Instead, many of our participants had learned their computational skills from working on fun projects as part of a fan community. This talk explores what some of those projects are, why they were motivating to our participants, and what it might look like to bring projects from fandom into the classroom.

Relevant paper: https://cmci.colorado.edu/~cafi5706/SIGCSE2021_ComputationalLabor.pdf
Wed, Apr 21, 2021
14:00-15:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club Shiny Quiz 3
Wed, Apr 14, 2021
14:00-15:00
Zoom
Martin Lages
University of Glasgow
A Hierarchical Signal Detection Model with Unequal Variance for Binary Data

Gaussian signal detection models with equal variance are typically used for detection and discrimination data, whereas models with unequal variance rely on data with multiple response categories or multiple conditions. Here a hierarchical signal detection model with unequal variance is suggested that requires only binary responses from a sample of participants. Introducing plausible constraints on the sampling distributions for sensitivity and response criterion makes it possible to estimate signal variance at the population level. This model was applied to existing data from memory and reasoning tasks, and the results suggest that parameters can be reliably estimated, allowing a direct comparison of signal detection models with equal and unequal variance.

https://psyarxiv.com/tbhdq/
Wed, Apr 07, 2021
14:00-15:00
Zoom
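
As background for the talk, the standard equal-variance signal detection quantities take two lines of R; the hierarchical unequal-variance model itself requires Bayesian estimation (see the preprint). The rates below are made up:

    hit_rate <- 0.80  # P(respond "signal" | signal present)
    fa_rate  <- 0.20  # P(respond "signal" | noise only)

    d_prime   <- qnorm(hit_rate) - qnorm(fa_rate)          # sensitivity
    criterion <- -0.5 * (qnorm(hit_rate) + qnorm(fa_rate)) # response bias
    # The unequal-variance model adds a signal-variance parameter, which the
    # hierarchical constraints make estimable from binary data.
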
Lisa DeBruine
University of Glasgow
Coding Club: Shiny Quiz 2

This week, we will continue working on the Buzzfeed-style quiz app to add code that records choices and controls the question progression.

Prep for this meeting:

  • decide what your quiz theme and questions will be
  • find images for each question (Pixabay is a good site for open-access images)
  • set up a few questions with image buttons
  • edit the info tab to include image citation information

You can access all materials at the coding club website.

Wed, Mar 31, 2021
14:00-15:00
Zoom
Charles Gray
Newcastle University
It’s good enough to fail at reproducible science (Charles Gray)

As scientists, it's desirable to demonstrate to others (including our future selves) that our computational scientific work validates the claims we make in published research. Being able to reproduce the work is a key component of easily validating an analysis, as well as extending the work. Reproducibility is not as simple, however, as sharing the code and data that produced our analysis. We must consider how our data and algorithms are understood, variables defined, etc. Our code needs to be accessible and interoperable. There is much guidance about best practice, but this is often overwhelming for those of us without formal computer science training. In this talk I share the story of how I attempted good enough practice in my doctoral thesis, and found even this was simply too much to achieve. Indeed, despite failing to achieve good enough scientific practice in any aspect of reproducibility, I discovered a delightful world of scientific workflows in R that make every day just that bit easier now. Failing at reproducible science in R is the best we can aim for; we will never achieve best practice in every aspect of reproducibility, and if we failed, then we tried. Through this process we discover useful tools to make the next time easier.

Bio: Dr Charles T. Gray is a postdoc with the evidence synthesis lab at Newcastle University in the UK. She is currently specialising in Bayesian network meta-analysis in Cochrane intervention reviews. She has three degrees: arts, music, and mathematics. None of these prepared her for writing an interdisciplinary computational metascience doctoral thesis in R.
Wed, Mar 24, 2021
14:00-15:00
Zoom
Lisa DeBruine
University of Glasgow
Coding Club: Shiny Quiz
Join us to learn more about coding with fun tasks. This week, we’ll be working on a shiny app for collecting quiz data. Download the template at https://github.com/debruine/shiny_template to code along, or just join and watch. The full code for the app will be available after the session.
Wed, Mar 17, 2021
14:00-15:00
Zoom
Dale Barr
University of Glasgow
Time-varying trends in repeated measures data: Model, control, or ignore?

Speaker: Dale Barr

Abstract: Almost all experiments in psychology and neuroscience involve repeated measurements taken over time. Very often, measurements from individual subjects show temporal variation due to such factors as practice, fatigue, mind wandering, or environmental distraction. However, in our statistical analyses we generally ignore this temporal variation. Under what circumstances is this a problem? And what are the potential benefits of modeling trial-by-trial fluctuations in performance? I will discuss pros and cons of various options for time-series modeling, including non-linear mixed-effects models, polynomial regression, and generalized additive mixed models (GAMMs). I will also discuss the potential for boosting power through experimental design, including results from a new project that is attempting to improve upon conventional randomization strategies.
Wed, Mar 10, 2021
14:00-15:00
Zoom
Robin Ince
University of Glasgow
Discussion: Edgington & Onghena, Randomization Tests
For more details, including how to access the readings, see the event announcement on Microsoft Teams.
Fri, Oct 16, 2020
15:30-16:30
Zoom
Jack Taylor
University of Glasgow
Bayesian Estimation of Hierarchical Models with Distributional Parameters (slides)

The recent M&Ms have led to an interesting debate on sample size and modelling variability. You might know that linear mixed effects models let you look at subject/item variability with random intercepts and slopes, but what if you want to model more than just the mean? What if you want to know how subjects/items differ in variance? Have no fear, Bayesian estimation is here! With Bayesian estimation you can build a single model that estimates the effect of your predictors on any parameters of whatever distribution your data takes. Throw in some random effects and hey presto! You’ve got a mixed effects model with distributional parameters. I’ll show an example of a model estimating the same maximal random effects structure for multiple distributional parameters of a non-Gaussian (shifted log-normal) distribution, which lets us look at subject/item variability of the intercepts and slopes of each parameter.

This paper shows some advantages of such models in accurately estimating effects and subject variability, and makes some interesting links to the large/small N debate:

Rouder, J. N., Lu, J., Speckman, P., Sun, D., and Jiang, Y. (2005). A hierarchical model for estimating response time distributions. Psychonomic Bulletin and Review, 12(2), 195-223. http://doi.org/10.3758/BF03257252
Fri, Jul 03, 2020
15:30-16:30
Zoom
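
A sketch of such a distributional model in brms, with predictors on both the location and the sigma of a shifted log-normal distribution; the columns (rt, cond, subj, item) are hypothetical, and fitting is slow:

    library(brms)

    f <- bf(rt ~ cond + (cond | subj) + (cond | item),
            sigma ~ cond + (cond | subj) + (cond | item))

    m <- brm(f, data = dat, family = shifted_lognormal(),
             chains = 4, cores = 4)
    summary(m)
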
Casper Hyllested
University of Glasgow
Generalizability Theory: Understanding variance across multiple facets in research (slides)

Several fields in the social sciences are facing a replication crisis. This can be partially attributed to sub-optimal experimental designs, yet it is in large part also due to the inherent differences between people and populations. When an experiment is done without sufficient consideration of how well results translate across different occasions, raters, tests, or any number of potential facets, the likelihood that the findings are reliable decreases greatly. One way to quantify this is through Generalizability Theory (GT), which analyzes how much variance in results can be attributed to any chosen facets of generalization by considering both main and interaction effects. Recently it has been used in repeated-measures experiments to determine whether tests, subscales, or even individual items measure states or traits in individuals through a signal-noise ratio calculation named the Trait Component Index (TCI).

In this talk I will first go through the basic precepts and theories underlying GT, and then focus primarily on the breadth of its application to research in psychology. I will demonstrate this through a brief range of results from previous studies and will then cover the results of my own dissertation in greater detail. Finally, and perhaps most importantly, I will outline some of the current limitations of GT and consider alternative methods, in an effort not to replace one default analysis with another but instead to highlight the advantages of a broader methodological repertoire in research.

Some optional background reading Casper has suggested:

Bloch, R., & Norman, G. R. (2012). Generalizability theory for the perplexed: A practical introduction and guide: AMEE Guide No. 68. Medical Teacher, 34(11), 960-992.

Medvedev, O. N., Krägeloh, C. U., Narayanan, A., & Siegert, R. J. (2017). Measuring Mindfulness: Applying Generalizability Theory to Distinguish between State and Trait. Mindfulness, 8(4), 1036 - 1046.

Steyer, R., Mayer, A., Geiser, C., & Cole, D. A. (2015). A Theory of States and Traits-Revised. Annual Review of Clinical Psychology, 11(1), 71-94.

Casper’s dissertation: https://smallpdf.com/shared#st=15d7fde3-6d68-46b2-af62-f18f7016bb1a&fn=2269064h+final+dissertation+draft-konverteret.pdf&ct=1592549319184&tl=word&rf=link
Fri, Jun 26, 2020
15:30-16:30
Zoom
Mine Çetinkaya-Rundel
University of Edinburgh, Duke University, and RStudio
The art and science of teaching data science
Modern statistics is fundamentally a computational discipline, but too often this fact is not reflected in our statistics curricula. With the rise of data science it has become increasingly clear that students want, expect, and need explicit training in this area of the discipline. Additionally, recent curricular guidelines clearly state that working with data requires extensive computing skills and that statistics students should be fluent in accessing, manipulating, analyzing, and modeling with professional statistical analysis software. In this talk, we introduce the design philosophy behind an introductory data science course, and discuss in-progress and future research on student learning as well as new directions in assessment and tooling as we scale up the course.
Thu, Jun 18, 2020
12:00-13:00
Zoom

Discussion of Smith & Little (2018), “In defense of the small N design”

Following up on Robin Ince’s suggestion, we will meet to discuss the following paper:

Smith, P.L., Little, D.R. Small is beautiful: In defense of the small-N design. Psychon Bull Rev 25, 2083–2101 (2018). https://doi.org/10.3758/s13423-018-1451-8

There will be a brief summary of the paper followed by commentary and group discussion.
Fri, Jun 12, 2020
15:30-16:30
Zoom
Robin Ince
University of Glasgow
Reconsidering population inference from a prevalence perspective

Within neuroscience, psychology and neuroimaging, it is typical to run an experiment on a sample of participants and then apply statistical tools to quantify and infer an effect of the experimental manipulation in the population from which the sample was drawn. Whereas the current focus is on average effects (i.e. the population mean, assuming a normal distribution [1]), it is equally valid to ask the alternative question of how typical the effect is in the population [2]. That is, we infer an effect in each individual participant in the sample, and from that infer the prevalence of the effect in the population [3–6]. We propose a novel Bayesian method to estimate such population prevalence, based on within-participant null-hypothesis significance testing (NHST). Applying Bayesian population prevalence estimation in studies sufficiently powered for NHST within individual participants could address many of the issues recently raised regarding replicability [7]. Bayesian prevalence provides a population-level inference currently missing for designs with small numbers of participants, such as traditional psychophysics or animal electrophysiology [8,9]. Since Bayesian prevalence delivers a quantitative estimate with associated uncertainty, it avoids reducing an entire experiment to a binary inference on a population mean [10].

  1. Holmes, A. & Friston, K. Generalisability, random effects and population inference. NeuroImage 7, (1998).
  2. Friston, K. J., Holmes, A. P. & Worsley, K. J. How Many Subjects Constitute a Study? NeuroImage 10, 1–5 (1999).
  3. Friston, K. J., Holmes, A. P., Price, C. J., Büchel, C. & Worsley, K. J. Multisubject fMRI Studies and Conjunction Analyses. NeuroImage 10, 385–396 (1999).
  4. Rosenblatt, J. D., Vink, M. & Benjamini, Y. Revisiting multi-subject random effects in fMRI: Advocating prevalence estimation. NeuroImage 84, 113–121 (2014).
  5. Allefeld, C., Görgen, K. & Haynes, J.-D. Valid population inference for information-based imaging: From the second-level t-test to prevalence inference. NeuroImage 141, 378–392 (2016).
  6. Donhauser, P. W., Florin, E. & Baillet, S. Imaging of neural oscillations with embedded inferential and group prevalence statistics. PLOS Computational Biology 14, e1005990 (2018).
  7. Benjamin, D. J. et al. Redefine statistical significance. Nature Human Behaviour 2, 6 (2018).
  8. Society for Neuroscience. Consideration of Sample Size in Neuroscience Studies. J. Neurosci. 40, 4076–4077 (2020).
  9. Smith, P. L. & Little, D. R. Small is beautiful: In defense of the small-N design. Psychon Bull Rev 25, 2083–2101 (2018).
  10. McShane, B. B., Gal, D., Gelman, A., Robert, C. & Tackett, J. L. Abandon Statistical Significance. The American Statistician 73, 235–245 (2019).
Fri, Jun 05, 2020
15:30-16:30
Zoom
Group Discussion
Variability in the analysis of a single neuroimaging dataset by many teams
Please join us to discuss the following paper:

Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature (2020). https://doi.org/10.1038/s41586-020-2314-9

Abstract: “Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.”

Katarina Moravkova will provide a summary of the paper at the start of the session.
Fri, May 29, 2020
15:30-16:30
Zoom

Discussion of Yarkoni, “The Generalizability Crisis”

In this meeting we will discuss Yarkoni’s paper “The Generalizability Crisis”, along with a critique by Lakens and Yarkoni’s response. The format will be a general overview followed by discussion in small randomly allocated breakout rooms.

The meetings will be held via Zoom. See the M&Ms Teams channel for a link.

Pre-print by Tal Yarkoni: https://psyarxiv.com/jqw35

Review by Daniel Lakens: http://daniellakens.blogspot.com/2020/01/review-of-generalizability-crisis-by.html

Rebuttal by Yarkoni: https://www.talyarkoni.org/blog/2020/05/06/induction-is-not-optional-if-youre-using-inferential-statistics-reply-to-lakens/
Fri, May 15, 2020
15:30-16:30
Zoom
Group Discussion
Gelfert (2016), “How to Do Science with Models”, Chapters 1 to 3 (pp. 1-70)

For this Zoom meeting, the host will provide a quick summary of Gelfert (2016), “How to Do Science with Models”, Chapters 1 to 3 (pp. 1-70). This will be followed by small group discussion in randomly allocated breakout rooms.

To find out about accessing Gelfert (2016) and to get the Zoom link please see the M&Ms channel on Microsoft Teams or reply to the announcement email.
Fri, May 01, 2020
15:30-16:30
Zoom
Jessie Sun
University of California at Davis
Eavesdropping On Everyday Life

This talk considers the unique insights that can be gained by combining multiple methods for studying daily life. In the Personality and Interpersonal Roles Study (PAIRS), 300 participants completed experience sampling method (ESM) self-reports while wearing the Electronically Activated Recorder (EAR), an unobtrusive audio recording device, for one week. Over the past five years, nearly 300 research assistants have transcribed and coded participants' behaviors and environments from over 300,000 EAR audio files. To provide two examples of questions that can only be answered by capturing actual behavior alongside ESM self-reports, I will describe two projects that have resulted from this dataset: 1) Do people have self-knowledge of their momentary personality states, and 2) What are people doing when they miss experience sampling reports? I will conclude by discussing the opportunities and challenges of implementing open practices with this highly identifiable and repeatedly used dataset.

Fri, Feb 28, 2020
12:00-13:00
Level 5, Seminar Room
Ruud Hortensius
University of Glasgow
Tools for reproducible fMRI analysis

On our way to transparent and reproducible neuroimaging, we need to consider data and code in combination with the publication. In this talk, I will introduce a series of tools that the field has developed (BIDS, Heudiconv, MRIQC, MRIQCeption, fMRIprep, OpenNeuro, NeuroVault) that will not only help to achieve the goal of fully reproducible neuroimaging but also make a neuroimager's life easier.

Note: while the tools are focussed on neuroimaging, the principle holds for behavioural, eye-tracking and physiological measures (e.g., BIDS is applicable to these measures).
Wed, Feb 19, 2020
14:00-15:00
Level 5, Seminar Room
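
Most of these tools assume data organised according to BIDS, which is essentially a conventional directory layout plus JSON metadata. A minimal sketch (subject and task names are placeholders):

    dataset_description.json
    participants.tsv
    sub-01/
      anat/
        sub-01_T1w.nii.gz
      func/
        sub-01_task-rest_bold.nii.gz
        sub-01_task-rest_bold.json
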
Lisa DeBruine
University of Glasgow
R Workshop, part 2: Understanding Mixed-Effects Models through Data Simulation (materials)
Wed, Feb 05, 2020
14:00-15:00
Level 5, Seminar Room
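
A condensed sketch of the workshop theme: simulate trial-level data with crossed random effects for subjects and items, then check that lmer recovers the parameters (all parameter values below are made up):

    library(lme4)
    set.seed(8675309)

    n_subj <- 40; n_item <- 40
    b0 <- 800; b1 <- 50  # grand mean RT and condition effect (ms)

    dat <- expand.grid(subj = factor(1:n_subj), item = factor(1:n_item))
    dat$cond <- ifelse(as.integer(dat$item) %% 2 == 0, 0.5, -0.5)
    subj_int <- rnorm(n_subj, 0, 100)  # by-subject random intercepts
    item_int <- rnorm(n_item, 0, 80)   # by-item random intercepts
    dat$rt <- b0 + b1 * dat$cond + subj_int[dat$subj] +
              item_int[dat$item] + rnorm(nrow(dat), 0, 200)

    summary(lmer(rt ~ cond + (1 | subj) + (1 | item), data = dat))
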
Lisa DeBruine
University of Glasgow
R Workshop, part 1: Introduction to Data Simulation (materials)
Tue, Jan 28, 2020
13:00-14:00
Level 5, Seminar Room
Zoltan Dienes
University of Sussex
How to obtain evidence for the null hypothesis (paper)
To get evidence for or against one’s theory relative to the null hypothesis, one needs to know what it predicts. The amount of evidence can then be quantified by a Bayes factor. It is only when one has reasons for specifying a scale of effect that the level of evidence can be specified for no effect (that is, non-significance is not a reason for saying there is no effect). In almost all papers I read people declare absence of an effect while having no rational grounds for doing so. So we need to specify what scale of effect our theory predicts. Specifying what one’s theory predicts may not come naturally, but I show some ways of thinking about the problem. I think our science will be better for it!
Wed, Jan 22, 2020
14:00-15:00
Level 5, Seminar Room
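
Dienes' approach can be sketched numerically: compare how well H0 and a theory-derived H1 predict the observed effect, with H1 modelled here as a half-normal scaled by the roughly predicted effect size. A simplified sketch with made-up numbers:

    obs <- 0.15; se <- 0.10  # observed effect and its standard error
    predicted <- 0.30        # scale of effect the theory roughly predicts

    lik_h0 <- dnorm(obs, mean = 0, sd = se)
    lik_h1 <- integrate(function(theta)
                dnorm(obs, mean = theta, sd = se) *
                2 * dnorm(theta, mean = 0, sd = predicted),  # half-normal prior
              lower = 0, upper = Inf)$value

    lik_h1 / lik_h0  # Bayes factor for H1 over H0
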
Guillaume Rousselet
University of Glasgow
Why interactions are difficult to interpret and many are simply uninterpretable (slides)

References: [1] On interpretation of interactions https://link.springer.com/content/pdf/10.3758/BF03197461.pdf

[2] On the interpretation of removable interactions: A survey of the field 33 years after Loftus https://link.springer.com/article/10.3758/s13421-011-0158-0

[3] Interactions in logistic regression models https://janhove.github.io/analysis/2019/08/07/interactions-logistic
Wed, Jan 15, 2020
14:00-15:00
Level 5, Seminar Room

Is There a Generalizability Crisis in Psychology and Neuroscience?
For this meeting we will discuss a controversial new paper by Tal Yarkoni, “The Generalizability Crisis,” available at https://psyarxiv.com/jqw35.
Wed, Jan 08, 2020
14:00-15:00
Level 5, Seminar Room
Dale Barr
University of Glasgow
Containerize your code: Creating Reproducible Software Environments (slides)
Wed, Nov 20, 2019
14:00-15:00
Level 5, Seminar Room
Andrew Burns
Urban Studies, University of Glasgow
Qualitative data – reaching the parts that statistics can’t?
An interactive seminar using data collected in an ethnography of homelessness (participant observation, observation, and qualitative [walking] interviews). While originally used for an anthropological analysis, we can use this seminar to explore the place of such data in psychology (is there a place?) including the methods of data collection, different approaches to coding/analysis, writing up, and the reflexive researcher.
Tue, Oct 29, 2019
12:00-13:00
Level 5, Seminar Room
Dr. Jo Neary & Kathryn Machray
University of Glasgow
Take a picture, it’ll last longer: Practical and ethical considerations on using visual methods in qualitative research

For many people, taking and sharing photographs is an everyday behaviour and a way of sharing parts of your life (and your identity) with friends and family. This session discusses how research can utilise this everyday practice in a research setting, in order to shine a light on the elements of a participant's life that may be inaccessible to traditional methods (such as surveys). In doing so, we explore the links between visual methods and ethnography, the use of visual methods in redressing the power imbalance inherent in research, and some of the practical and ethical considerations of the method.

This session will include two case studies from our research (children’s experience of neighbourhood demolition; men’s experience of food poverty), as well as participation from the audience.
Tue, Oct 22, 2019
14:00-15:00
Level 5, Seminar Room
Anne Scheel
Eindhoven University of Technology
Is Hypothesis Testing Overused in Psychology?

A central goal of many of the open-science reforms proposed in reaction to the replication crisis is to reduce false-positive results in the literature. Very often, they assume that research consists of confirmatory hypothesis tests and that ‘questionable research practices’ are ways in which researchers cut corners to present evidence in favour of hypotheses that may in fact be false. Two increasingly popular methods to prevent this from happening are preregistration and Registered Reports: Both require that authors state their hypotheses and analysis plan before conducting their study, which is supposed to prevent twisting the data to fit the narrative (e.g. p-hacking) or twisting the narrative to fit the data (hypothesising after results are known). In theory, this practice safeguards the validity of inferences drawn from hypothesis tests by removing any ‘wiggle room’ authors could exploit to produce spurious positive results. In practice, many psychologists seem to struggle to fit their research into this revised, now much more narrow framework of confirmatory hypothesis testing: Preregistration has been accused of stifling creativity, is described as difficult even by its proponents, and analyses of published preregistrations show that most do not sufficiently restrict the above-mentioned wiggle room.

I want to argue that by making the very strict requirements of a confirmatory hypothesis test so explicit, preregistration and Registered Reports reveal that hypothesis testing may be the wrong tool for a substantial number of research questions in psychology. The conflation of exploratory and confirmatory research that psychologists have been used to may have stifled the development of a framework for high-quality exploratory research, which is the necessary basis for developing hypotheses in the first place. As such, resistance against preregistration and some of the growing pains the format is experiencing may simply be a consequence of it laying bare the misfit between research goals and the excessive focus on hypothesis testing in psychology. If this is true, psychologists may be well advised to shift this focus and work towards better literacy in the exploratory groundwork that precedes confirmatory hypothesis tests.
Fri, Oct 18, 2019
12:00-13:00
Level 5, Seminar Room
Benedict Jones
University of Glasgow
Introducing p-curve (slides)
P-curve is an alternative to traditional forms of meta-analysis that, in principle, allows you to tell whether or not a published literature contains evidentiary value. This short session will (1) introduce the method using a preregistered case study from the face perception literature and (2) discuss the advantages and disadvantages of the p-curve method. (A toy simulation of this logic appears below this listing.)
Wed, Oct 09, 2019
14:00-15:00
Level 5, Seminar Room
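To make the logic concrete, here is a toy simulation in base R (my own illustration, not the speaker's preregistered case study): when an effect is real, significant p-values pile up near zero; when there is no effect, significant p-values are uniform between 0 and .05.

```r
# Toy illustration of the intuition behind p-curve (not the official app):
# simulate two-sample t-tests with and without a true effect, keep only
# the "publishable" significant results, and compare the p-value shapes.
set.seed(1)

sim_p <- function(n_sims, d, n = 30) {
  replicate(n_sims, t.test(rnorm(n, mean = d), rnorm(n))$p.value)
}

p_null   <- sim_p(10000, d = 0)    # no true effect
p_effect <- sim_p(10000, d = 0.5)  # medium true effect

hist(p_null[p_null < .05], breaks = 20,
     main = "No effect: flat p-curve")      # uniform on (0, .05)
hist(p_effect[p_effect < .05], breaks = 20,
     main = "True effect: right-skewed")    # mass piles up near zero
```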
Carolyn Saund
University of Glasgow
Crowdsourcing: The Good, The Bad, and The Ugly
An introduction to crowdsourcing methods for online surveys and experiments. First, a brief introduction to crowdsourced data for the skeptics: history, ethics, stats on data reliability, designing tasks, and what should and should not be done via the magic of the internet. Rife with warnings, caveats, and cautionary tales. Second, a hands-on, step-by-step introductory demo of designing and deploying tasks for online workers. Not a workshop, just a demo, so no need to bring laptops. Will your fears be assuaged, or validated? You’ll have to come to find out!
Wed, Oct 02, 2019
14:00-15:00
Level 5, Seminar Room
Jack Taylor and Lisa DeBruine
University of Glasgow
R Shiny Afternoon

Shiny is an R package that makes it easy to create interactive apps that can be deployed to the web (find out more here: https://shiny.rstudio.com/). Shiny apps are great for teaching concepts, presenting research results, and automating simple tasks.

This four-hour workshop (1pm to 5pm) will cover:
- A basic introduction to Shiny
- How to collect data from participants in Shiny (e.g. questionnaires)
- How to build user-friendly apps for solving specific problems

Please bring your own laptop, and come having already installed R, RStudio, and the Shiny package. (A minimal example app is sketched below this listing.)
Tue, Sep 24, 2019
13:00-14:00
Level 5, Seminar Room
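For anyone who wants a head start before the workshop, here is a minimal sketch of a complete Shiny app, assuming only that the shiny package is installed (the workshop's own examples may differ):

```r
# A complete, minimal Shiny app: one input, one reactive output.
# Paste into the R console or save as app.R and click "Run App" in RStudio.
library(shiny)

ui <- fluidPage(
  titlePanel("Hello Shiny"),
  sliderInput("n", "Number of random values:",
              min = 10, max = 500, value = 100),
  plotOutput("hist")
)

server <- function(input, output) {
  # re-renders automatically whenever the slider changes
  output$hist <- renderPlot({
    hist(rnorm(input$n), main = paste("n =", input$n))
  })
}

shinyApp(ui = ui, server = server)
```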

Mixed effects models Q&A
If you have questions about mixed-effects modeling, here is an opportunity to come and ask! Dale and Christoph will be there to field your questions.
Wed, Jun 12, 2019
14:00-15:00
Level 5, Seminar Room
Maria Gardani & Satu Baylan
University of Glasgow
Systematic reviews and meta-analysis Wed, May 01, 2019
14:00-15:00
Level 5, Seminar Room
Martin Lages
University of Glasgow
Variance Constraints for Hierarchical Signal Detection Models
Bayesian models typically place uninformative or weakly informative priors on parameters. Using a well-known data set on inductive and deductive reasoning, I will illustrate how incorporating variance constraints can help to estimate critical parameters and compare signal detection models with equal and unequal variance. (A short sketch of the basic signal detection quantities appears below this listing.)
Tue, Apr 23, 2019
14:00-15:00
Level 5, Seminar Room
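As background, here is a minimal sketch of the standard equal-variance signal detection quantities in base R; the hit and false-alarm rates are made-up illustration values, and the talk's hierarchical Bayesian models go well beyond this:

```r
# Equal-variance signal detection: sensitivity (d') and criterion (c)
# recovered from a hit rate and a false-alarm rate.
hit_rate <- 0.80   # illustration values, not real data
fa_rate  <- 0.20

d_prime   <- qnorm(hit_rate) - qnorm(fa_rate)           # ~1.68
criterion <- -0.5 * (qnorm(hit_rate) + qnorm(fa_rate))  # ~0 (unbiased)

# The unequal-variance model additionally gives the signal distribution
# its own standard deviation; constraining or placing priors on that
# extra variance parameter is the topic of the talk.
```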
Robin Ince
University of Glasgow
A gentle introduction to the Partial Information Decomposition

In many cases in neuroimaging or data analysis, we evaluate a statistical dependence across many variables and find effects in more than one. For example, we might find an effect between a presented stimulus and neural responses in different spatial regions, time periods, or frequency bands. A natural question in such situations is to what extent the statistical relationship in the two responses is common, or overlapping, and to what extent there is a unique effect in one response that is not related to the other. An information theoretic framework called the Partial Information Decomposition (PID) has been proposed to address these questions.

The first part of the session will be a gentle introduction to information theoretic quantification of statistical interactions, introducing co-information, redundancy, synergy, and the basic theory of the PID, as well as some applications (including interactions between neural responses, interactions between multi-modal stimulus features in speech, interactions between neural responses and behaviour, and predictive model comparison).

The second part of the session will go into more detail on the implementation of the PID, including the theory and computation of the Iccs redundancy measure, and further discussion of issues such as misinformation (negative unique information) and applications. There will be a break between the two parts to give people the chance to opt out of the more technical second part. (A toy co-information example appears below this listing.)
Wed, Apr 10, 2019
14:00-15:00
Level 5, Seminar Room
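As a primer on one of the quantities the talk builds on, here is a toy co-information calculation in base R (my own example, not the speaker's code). For the XOR system, co-information is -1 bit, the textbook case of pure synergy:

```r
# Co-information for discrete variables: coI = I(X;Y) - I(X;Y|Z).
# Negative values indicate net synergy; positive values net redundancy.
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }  # entropy in bits

mi <- function(joint) {  # I(X;Y) from a 2-way joint probability table
  H(rowSums(joint)) + H(colSums(joint)) - H(joint)
}

# XOR system: Z = xor(X, Y), all four (X, Y) combinations equally likely
tab <- array(0, dim = c(2, 2, 2))  # dims: X, Y, Z
tab[1, 1, 1] <- tab[2, 2, 1] <- tab[1, 2, 2] <- tab[2, 1, 2] <- 0.25

ixy <- mi(apply(tab, c(1, 2), sum))  # I(X;Y) = 0: pairwise independent

cmi <- 0                             # I(X;Y|Z), averaged over Z
for (z in 1:2) {
  pz  <- sum(tab[, , z])
  cmi <- cmi + pz * mi(tab[, , z] / pz)
}

ixy - cmi  # co-information = -1 bit: a purely synergistic system
```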
Anna Henschel
University of Glasgow
Adventures with rtweet and tidytext
We have always suspected it: Twitter and #rstats go hand in hand. In this tutorial we will do some web-scraping with the rtweet package and look at various fun ways to analyse the data, taking advantage of the tidytext package. Join this tutorial if, for example, you want to learn how to make a colourful word cloud out of everything you ever tweeted, or if you want to plot what people have been saying about the Great British Bake Off over time. (A rough sketch of this kind of pipeline appears below this listing.)
Wed, Apr 03, 2019
14:00-15:00
CCNI Analysis Suite
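A rough sketch of the kind of pipeline the tutorial covers, assuming rtweet is already authenticated with a Twitter account (the hashtag and counts here are arbitrary):

```r
# Scrape recent tweets and count the words they contain.
library(rtweet)
library(tidytext)
library(dplyr)

tweets <- search_tweets("#GBBO", n = 1000, include_rts = FALSE)

word_counts <- tweets %>%
  select(text) %>%
  unnest_tokens(word, text) %>%                 # one row per word
  anti_join(get_stopwords(), by = "word") %>%   # drop "the", "and", ...
  count(word, sort = TRUE)

# e.g. wordcloud::wordcloud(word_counts$word, word_counts$n)
```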
Lisa DeBruine
University of Glasgow
Simple packages and unit tests in R Wed, Mar 27, 2019
14:00-15:00
CCNI Analysis Suite
Carolyn Saund
University of Glasgow
Version Control and Beyond: How Git Will Change Your Life
Commits? Repositories? Branches? Git has a wide vocabulary that isn’t always intuitive, but that’s no reason to be scared away. From understanding branches and version control to collaborative repositories and undoing history, this is an introduction to the concepts behind git and GitHub. Open source software is the future of data science and scientific analyses, so now is the chance to gain a basic understanding of the magic behind remote and online collaboration. No experience, only curiosity required!
Wed, Mar 20, 2019
14:00-15:00
Level 6, Meeting Room
Georgi Karadzhov
SiteGround
Jupyter notebooks for efficient research
Coming from a Natural Language Processing engineer’s perspective, where research is embedded in my job requirements, I have found Jupyter Notebooks to be a very useful tool for research. Like every other tool out there, it has its benefits, but it can also make things go very wrong. I will start with a (very) brief introduction to Jupyter Notebooks and the basic workflow. Most importantly, I want to open a discussion about how research can be done using Jupyter Notebooks. I will discuss how they are used to facilitate rapid research and what the most common problems with using them are. The second part of the talk will be focused on two of the biggest issues in research code: code quality and reproducibility. I will offer advice and processes that can be adopted in order to improve the quality of the code and ultimately the reproducibility of the conducted experiments.
Mon, Mar 18, 2019
14:00-15:00
Level 5, Seminar Room
Alba Contreras Cuevas
Universidad Complutense de Madrid
Network analysis as an alternative to conceptualize mental disorders

[** NOTE: meeting takes place at 10am, a departure from the regular time **]

In recent years, the network approach has been gaining popularity in the field of psychopathology as an alternative to traditional classification systems like the DSM. The network perspective conceptualises mental disorders as direct interactions between symptoms, rather than as a reflection of an underlying entity. The number of studies using this approach has grown rapidly in mental health. Thus, the aim of this talk is to introduce this new approach in psychopathology, known as network theory. The talk will summarise the concepts and methodological issues associated with studying mental problems from this approach, and show the current status of network theory and modelling in psychopathology. The final objective is to exchange ideas regarding the theory, methodology, and limitations of network analysis.
Wed, Mar 13, 2019
10:00-11:00
Level 5, Seminar Room
Robin Ince and Guillaume Rousselet
University of Glasgow
Discussion of Sassenhagen & Draschkow (2018) paper on cluster based permutation tests

A new paper by Jona Sassenhagen and Dejan Draschkow challenges common interpretations of results from cluster-based permutation tests of MEG/EEG data. Please read the paper and come ready to discuss. The discussion will be led by Robin Ince and Guillaume Rousselet.

https://onlinelibrary.wiley.com/doi/10.1111/psyp.13335
Wed, Feb 20, 2019
14:00-15:00
Level 6, Meeting Room
Daniël Lakens
Eindhoven University of Technology
Justify Everything (Or: Why All Norms You Rely on When Doing Research Are Wrong) (slides)
Science is difficult. To do good science researchers need to know about philosophy of science, learn how to develop theories, become experts in experimental design, study measurement theory, understand the statistics they require to analyze their data, and clearly communicate their results. My personal goal is to become good enough in all these areas such that I will be able to complete a single flawless experiment, just before I retire – but I expect to fail. In the meantime, I often need to rely on social norms when I make choices as I perform research. From the way I phrase my research question, to how I determine the sample size for a study, or my decision for a one- or two-sided test, my justifications are typically ‘this is how we do it’. If you ask me ‘why’ I often don’t know the answer. In this talk I will explain that, regrettably, almost all the norms we rely on are wrong. I will provide some suggestions for attempts to justify aspects of the research cycle that I am somewhat knowledgeable in, mainly in the area of statistics and experimental design. I will discuss the (im)possibility of individually accumulating sufficient knowledge to actually be able to justify all important decisions in the research you do, and make some tentative predictions that in half a century most scientific disciplines will have become massively more collaborative, with a stronger task division between scholars working on joint projects.
Fri, Feb 08, 2019
15:30-16:30
Level 5, Seminar Room

M&Ms preparation for the seminar by Daniël Lakens: Equivalence Testing for Psychological Research, Justify Your Alpha, and a related blog post Wed, Feb 06, 2019
14:00-15:00
Level 6, Meeting Room
Dale Barr
University of Glasgow
Tutorial on Iteration In R

If you’re copying and pasting R code, or getting stuck in loops, please come to this hands-on tutorial on iteration and learn how to use the purrr::map_* functions (like the *apply functions in base R, but better!).

Some optional background reading: https://r4ds.had.co.nz/iteration.html

There are workstations in the lab you can use, or you can bring your own laptop. In the latter case, please make sure to have the tidyverse installed. (A short example of map_* in action appears below this listing.)
Wed, Jan 16, 2019
14:00-15:00
Boyd Orr Building, Room 520
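For a taste of what the tutorial covers, a short sketch using purrr with the built-in mtcars data (assuming the tidyverse is installed):

```r
# map_* applies a function to each element and fixes the return type,
# replacing copy-pasted code and hand-rolled loops.
library(purrr)

# instead of mean(mtcars$mpg), mean(mtcars$disp), ... for every column:
map_dbl(mtcars, mean)   # one mean per column, as a named double vector

# formula shorthand: fit one regression per cylinder group,
# then pull the wt slope out of each fit
mtcars %>%
  split(.$cyl) %>%
  map(~ lm(mpg ~ wt, data = .x)) %>%
  map_dbl(~ coef(.x)[["wt"]])
```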
Dale Barr
University of Glasgow
Randomizing and automating assessment with the R exams package (R exams, slides)
I will give an overview of the open source “exams” package for R, which assists in the generation and assessment of electronic and written exams. I recently used this package for the first time to create a written exam for my L3 statistics course. We validated the performance of the automatic scanning of answers and found over 99% accuracy. Although it requires substantial time to set up, over the long run the exams package makes creating and marking exams more efficient. More information about the package can be found at http://www.r-exams.org. (A sketch of the basic workflow appears below this listing.)
Wed, Dec 12, 2018
13:00-14:00
Level 6, Meeting Room
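A hedged sketch of the workflow described above: the function names come from the package documentation at r-exams.org, but the exercise filenames are placeholders and most arguments are omitted.

```r
# Each exercise lives in its own dynamic .Rmd/.Rnw file with randomised
# parameters; an exam is just a list of such files.
library(exams)

my_exam <- list("anova.Rmd", "ttest.Rmd", "regression.Rmd")  # placeholders

# Written "NOPS" exams produce answer sheets designed for automatic
# scanning (nops_scan) and marking (nops_eval).
exams2nops(my_exam, n = 120,  # 120 randomised copies
           institution = "University of Glasgow")

# The same exercise files can be rendered to other formats:
exams2pdf(my_exam)    # plain PDF
exams2html(my_exam)   # HTML preview
```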
Jack Taylor
University of Glasgow
LexOPS: A Web App for Stimulus Generation in Psycholinguistics LexOPS github
A common problem in designing psychology experiments that use word stimuli is that they often require labour-intensive methods of stimulus generation to ensure that specific prelexical, lexical, and semantic features are suitably controlled for. When this is done poorly, confounding variables can obscure the interpretation of results. This talk and demonstration introduce a new, intuitive R Shiny app which integrates a wide range of features from multiple sets of norms, corpora, megastudies, and measures. The app allows the user to easily generate stimuli, fetch characteristics of existing stimuli, and identify suitably matched stimulus controls within desired constraints.
Wed, Dec 05, 2018
14:00-15:00
Level 5, Seminar Room
Lawrence Barsalou
University of Glasgow
The Situated Assessment Method (SAM^2): A new approach to measuring, understanding, and predicting health behaviours
Based on theories of situated cognition and embodiment in cognitive science, the Situated Assessment Method (SAM^2) offers a theoretically motivated approach for measuring health behaviours at both the group and individual levels. Rather than attempting to capture a health behaviour with general items that abstract over relevant situations (as in standard self-report instruments), SAM^2 assesses two situated dimensions. First, SAM^2 establishes the specific situations associated with an individual’s behaviour in a health domain. Second, SAM^2 samples features from the situated action cycle that attempt to predict the behaviour across situations. As a result, SAM^2 establishes overall measures of a health behaviour grounded directly in situational experience, along with features of the situated action cycle that predict the behaviour. In recent studies, we have found that SAM^2 does an excellent job of predicting health behaviours associated with habits, eating, stress, and trichotillomania (compulsive hair pulling). Using mixed effects models, SAM^2 typically explains 60 to 80% of the variance at the group level and more variance at the individual level, demonstrating large systematic individual differences. In addition, SAM^2 represents individual differences in an explicit manner that has potential for supporting individuals as they understand and work with a health behaviour. Issues associated with causality, explicit vs. implicit measures, and external validity are important to address.
Wed, Nov 28, 2018
14:00-15:00
Level 6, Meeting Room
Robin Ince
University of Glasgow
The Problem of Multiple Comparisons in Neuroimaging Wed, Nov 21, 2018
14:00-15:00
Level 6, Meeting Room
Martin Lages
University of Glasgow
Spot the celebrity lookalike!
We applied logistic mixed models to data from two identity discrimination experiments. We investigated the effect of familiarity on discrimination performance, exploring subject-specific and item-specific random effects with the lme4 and brms packages in R. (A generic sketch of this kind of model appears below this listing.)
Wed, Nov 14, 2018
14:00-15:00
Level 5, Seminar Room
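A generic sketch of the kind of model described above; the variable and data names are placeholders, not the study's actual data:

```r
# Logistic mixed model with crossed random effects for subjects and
# face items (hypothetical names: correct, familiarity, subject, item).
library(lme4)

mod <- glmer(
  correct ~ familiarity +            # fixed effect of familiarity
    (1 + familiarity | subject) +    # subject-specific random effects
    (1 | item),                      # item-specific random intercepts
  data = lookalike_data,             # placeholder data frame
  family = binomial
)
summary(mod)

# brms fits the Bayesian analogue with near-identical formula syntax:
# brm(correct ~ familiarity + (1 + familiarity | subject) + (1 | item),
#     data = lookalike_data, family = bernoulli())
```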
Jessica Flake
McGill University
The Fundamental Role of Construct Validity in Original and Replicated Research Wed, Nov 07, 2018
14:00-15:00
Level 5, Seminar Room
Lisa DeBruine
University of Glasgow
Setting up your first Shiny app (tutorial)

*** NOTE TIME CHANGE: NOW TAKING PLACE AT 15:00! ***

Shiny is an R package that makes it easy to build interactive web apps straight from R. In this tutorial, I’m going to walk you through the absolute basics of setting up a Shiny app, starting with the example built into RStudio. I’m not going to explain yet how Shiny apps are structured; the goal is just to get something up and running and give you some familiarity with the layout of a fairly simple app. The tutorial will take place in a computer lab, so there is no need to bring a laptop, but if you do, please make sure you have installed R and RStudio before you arrive. You don’t need to have any other familiarity with R.
Wed, Oct 31, 2018
15:00-16:00
TBA
Ben Jones
University of Glasgow
Writing Registered Reports (slides)
Publication bias distorts the scientific record and can give a false impression of the robustness of published results. Publication bias is neutralised in Registered Reports, a new publication format in which the rationale, methods, and analysis plan are peer reviewed prior to data collection or analysis, and in-principle acceptance can be given. This short talk will introduce and highlight useful online resources for writing Registered Reports. I will also discuss some of the misconceptions about the format and the key lessons I have learnt from preparing my first Registered Reports.
Wed, Oct 24, 2018
14:00-15:00
Level 5, Seminar Room
Dale Barr
University of Glasgow
Mixed-effects modeling of temporal autocorrelation: Keep it minimal
Linear models assume the independence of residuals. This assumption is violated in datasets where there is temporal autocorrelation among the residuals. Recently, Baayen, Vasishth, Kliegl, and Bates (2017) presented evidence for autocorrelation in three psycholinguistic datasets, and showed how Generalized Additive Mixed Models (GAMMs) can be used to model these autocorrelation patterns. However, there is currently little understanding of the impact of autocorrelation on model performance, and the extent to which GAMMs improve (or impair) inference. Through Monte Carlo simulation, we found that mixed-effects models perform well in the face of autocorrelation, except when autocorrelation is confounded with treatment variance. GAMMs did little to improve power, and in fact, the use of factor smooths dramatically impaired power for detecting effects of any between-subjects factors. These results suggest GAMMs are only needed in special cases.
Wed, Oct 10, 2018
14:00-15:00
Level 5, Seminar Room
Christoph Schild
Institute for Psychology, University of Copenhagen
Introduction to the formr survey framework in R (formr, slides)

Formr is a free and open-source survey framework which supports a broad range of study designs. Formr surveys have a responsive layout (i.e. they also work on phones and tablets), and the data can easily be shared via spreadsheets. Further, studies can be linked to the OSF. Because of its integration with R, formr allows researchers to use a familiar programming language to enable complex features. For more details and information, please see https://formr.org

Wed, Oct 03, 2018
15:00-16:00
Level 5, Seminar Room