Alexander Refsum Jensenius

I am Professor of Music Technology at the University of Oslo, Director of RITMO, Head of the fourMs Lab and Director of MishMash Centre for AI and Creativity. Editor of A NIME Reader and Sonic Design. Author of Sound Actions and Still Standing.

Hyperscanning with fNIRS

Today, we had an engaging fNIRS workshop at RITMO. One of the topics we explored was “hyperscanning,” a term that confused several of the non-psychologists present. I was also bewildered when I first encountered the term, so here is a short blog post explaining what it is. What is Hyperscanning? The short answer is that hyperscanning is the simultaneous recording of neural or physiological activity from two or more people to study interpersonal brain dynamics. ...
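A common first analysis step in hyperscanning is to correlate the simultaneously recorded signals across participants. Here is a minimal, hypothetical Python sketch of that idea (my own toy illustration, not from the workshop; real fNIRS analyses use coherence or more advanced synchrony measures on actual time series):

```python
def pearson(x, y):
    """Pearson correlation between two equally long, simultaneously sampled signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Toy "recordings" from two participants, captured at the same moments in time
participant_a = [0.1, 0.4, 0.8, 0.5, 0.2]
participant_b = [0.2, 0.5, 0.9, 0.6, 0.3]  # closely follows participant A

print(round(pearson(participant_a, participant_b), 3))  # 1.0 (perfectly in sync)
```

The key point is the *simultaneity* of the two recordings; without a shared clock, interpersonal measures like this are meaningless.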

January 15, 2026 · 3 min · 548 words · ARJ

Multi-page printing using the terminal

It feels somewhat old-school to write a blog post about printing, but this is mainly a note-to-self and a reminder for others who want to print a poster with an A4 printer. I don’t print very often, but I had to print an A2-sized poster from home, where I only have an A4 printer. I couldn’t find any options to print across multiple pages in the Ubuntu printer dialogue. Testing on a Windows computer didn’t help either. But of course, there are terminal solutions. ...

January 14, 2026 · 1 min · 174 words · ARJ
Tom Sachs’s Model Thirty-Six (2014), experimental stereos that challenge established notions of form and function, in the collection of the San Francisco Museum of Modern Art.

Integrating Musicology, Psychology, and Technology in Education

One of my main goals while teaching MUS2640 – Sensing Sound and Music last semester was the integration of perspectives from musicology, psychology, and technology. The main point is to introduce the basics of music psychology and music technology, two disciplines that are quite distinct in most places, but which we at the University of Oslo have a tradition of combining. In this post, I explain the rationale. Three “logies” One of my main arguments is that we should consider the three “logies” involved in this course: ...

January 13, 2026 · 8 min · 1535 words · ARJ
Two people in a motion capture lab

Kinetics and Kinematics

People often confuse kinetics and kinematics. What are they, and which one can you derive from motion capture data? Kinematics Kinematic analysis concerns the geometric aspects of motion, including position, velocity, and acceleration. When dealing with motion capture, you typically get either position (from a camera-based system) or something similar to acceleration (from accelerometers). These are related, and since I have now figured out how to make equations here on the blog, here is a summary of high school physics: ...
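The position–velocity–acceleration relationship can also be illustrated numerically. A minimal Python sketch (my own illustration, assuming evenly sampled positions such as those a camera-based mocap system provides), using finite differences to derive velocity and acceleration:

```python
def finite_difference(samples, dt):
    """First-order finite difference: approximates the derivative of a sampled signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# Positions sampled at 100 Hz from constant acceleration: p(t) = 0.5 * a * t^2
dt = 0.01   # sampling interval in seconds (100 Hz is a typical mocap rate)
a = 9.81    # constant acceleration in m/s^2
positions = [0.5 * a * (i * dt) ** 2 for i in range(100)]

velocities = finite_difference(positions, dt)       # first derivative of position
accelerations = finite_difference(velocities, dt)   # second derivative of position

# Differentiating twice recovers the constant acceleration we started from:
print(round(accelerations[0], 2))  # 9.81
```

Going the other way (from accelerometer data to position) means integrating twice, which is much noisier in practice because measurement errors accumulate.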

January 11, 2026 · 2 min · 359 words · ARJ
Stones

Octaves aren't Rhythmic

I see that the concept of “tempo octave” is being used by some researchers in the music information retrieval (MIR) community. This is a confusing term from a musical perspective. Here I explain why it is a bad idea. Octaves An octave is a core term in (Western) music theory describing an interval, the relationship between two notes (and tones!) with a frequency ratio of 2:1. Here is an example of an octave: ...
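To make the 2:1 ratio concrete, here is a tiny Python sketch (my own illustration, not from the post):

```python
def octave_up(freq_hz, n=1):
    """Transpose a frequency up (or down, with negative n) by n octaves,
    using the 2:1 frequency ratio per octave."""
    return freq_hz * 2 ** n

print(octave_up(440))      # A4 -> A5: 880 Hz
print(octave_up(440, 2))   # A4 -> A6: 1760 Hz
print(octave_up(880, -1))  # A5 -> A4: 440.0 Hz
```

The MIR “tempo octave” borrows this doubling/halving relation for tempo estimates (e.g., 120 vs. 240 BPM), which is the usage the post questions.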

January 10, 2026 · 2 min · 351 words · ARJ

Videograms for Video Navigation

Yesterday, I wrote about some reflections I had during Olgerta Asko’s PhD defence. Today, while chopping up the video recording to put on the RITMO web page, I thought it might help to use a videogram to assist with the segmentation. Videograms A videogram is similar to a motiongram, the main difference being that the videogram uses the regular video image as input to the “compression” instead of a motion video. Both give an impression of what is in a video file over time. We have functions for creating both videograms and motiongrams in the Musical Gestures Toolbox, but they are optimised for use with the other functions in the toolbox. ...
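The kind of “compression” involved can be sketched as collapsing each frame to a single pixel column and stacking those columns over time. A minimal pure-Python illustration with tiny grayscale frames (my own sketch of the idea, not the Musical Gestures Toolbox implementation, which works on real video files):

```python
def videogram(frames):
    """Collapse each grayscale frame (a 2D list, rows x cols) to one column
    by averaging across each row, then stack the columns left to right.
    The result is a 2D image: height = rows, width = number of frames."""
    columns = []
    for frame in frames:
        columns.append([sum(row) / len(row) for row in frame])
    # Transpose so that time runs along the horizontal axis: result[y][t]
    return [list(pixel_row) for pixel_row in zip(*columns)]

# Two tiny 2x3 "frames": one dark, one bright
frames = [
    [[0, 0, 0], [10, 10, 10]],
    [[100, 100, 100], [200, 200, 200]],
]
vg = videogram(frames)
print(vg)  # [[0.0, 100.0], [10.0, 200.0]]
```

Scene changes show up as abrupt colour shifts along the horizontal (time) axis, which is what makes the display useful for segmentation.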

January 10, 2026 · 3 min · 453 words · ARJ

From Generalisation to Deliberation

Today, we had the PhD defence of Olgerta Asko at RITMO. Her research is super interesting in itself (check out this feature story for an overview). This blog post follows up on one of the points she made during her trial lecture that I hadn’t thought about before: the difference between generalisation, inference, and deliberation. Towards deliberation Olga argued that current AI—here understood as large language models (LLMs)—is based on generalisation. LLMs extract patterns from a lot of data and apply them broadly. As we have seen with recent commercial products, and as I have explored in many ways on this blog, LLMs excel at this task. ...

January 9, 2026 · 5 min · 932 words · ARJ
A large, colorful wave from left to right, boxes with years and descriptions

The History and Future of Creative AI

Yesterday, I wrote about the history and future of AI in general. Today, I am continuing my explorations by examining the role of AI in the arts and the impact of the arts on AI. The exploration is based on this notebook, with 60 sources collected by NotebookLM. AI in the Arts AI has a multi-generational history in the arts, transitioning from centuries-old mechanical automatons to symbolic rule-making and eventually to deep learning-based approaches. In the following, I will go through some of the works picked out by NotebookLM. Some of them I know well; others I had not heard of before. A complete list of detected works is at the end of the post. ...

January 4, 2026 · 18 min · 3724 words · ARJ

The History and Future of AI

Due to MishMash, I am nowadays lecturing on AI, music, and creativity several times a week. I usually include a brief overview of machine learning history, mainly to explain that ChatGPT didn’t come out of nowhere but was the result of decades of research. To check that my story holds, and to get a few more critical years and names in place, this blog post summarizes the brief history of AI to date. ...

January 3, 2026 · 10 min · 2092 words · ARJ

Using Google NotebookLM to summarize my academic results

I find Google’s NotebookLM to be one of the most powerful tools for researchers these days. Its core function is to work with the material you upload. Hence, it differs from ChatGPT and other tools that invent things on their own or search the web. It also offers many different types of reports, including audio and video. But how accurate is it? Testing NotebookLM To test NotebookLM on some material I know very well, I decided to provide it with a collection of public data about myself, more specifically, what I have registered in the Norwegian NVA system, the database we use in Norway to register all our academic activities. This includes academic publications, public lectures, artistic works, interviews, etc. In my case, that amounts to around 1000 entries dating back to 2000. ...

January 2, 2026 · 10 min · 1963 words · ARJ