
Tiago Morais Morgado


About Me

I am a composer, improviser, backend developer, digital signal processing specialist, and sound & computer graphics expert, blending avant-garde artistry with cutting-edge technology. With advanced degrees in computer music and sonology, paired with classical training in viola and piano, I craft emotionally resonant, interdisciplinary works that bridge music, code, and visuals. My practice draws inspiration from the Darmstadt, Cologne, and Paris avant-garde scenes, channeling their experimental ethos into innovative compositions, performances, and software solutions.

Education

  • Master’s in Sonology (HBO Level 7, 2011–2013)
  • Bachelor’s in Computer Music (HBO Levels 5–6, 2008–2011)
  • Partial Musicology Bachelor’s (two universities, 2006–2008)
  • HBO Levels 2 and 4 in Classical Music Performance (2000–2006)

Professional Expertise

Composer

Crafting original, emotionally evocative music across genres—classical, electronic, experimental, and cinematic—designed to resonate with diverse audiences.

  • Creative Process: Composes using Ableton Live, Logic Pro, and LilyPond, tailoring works to narratives or moods. Integrates Python and AI (e.g., Magenta) for algorithmic composition, exploring novel harmonic and timbral landscapes (a minimal sketch follows this list).
  • Applications: Created scores for short films, video games, and immersive installations, including a generative ambient soundtrack for an interactive art exhibit.
  • Impact: Produces culturally relevant works performed in concert halls and on digital platforms, with premieres on BBC Radio 3 and Antena 2.
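
As an illustration of the algorithmic-composition workflow mentioned above, here is a minimal sketch that random-walks over a scale and writes the result to MIDI. It uses music21 (which also appears in my pipeline work below) rather than Magenta, purely to keep the example self-contained; the scale, step sizes, and note count are arbitrary illustrative choices, not a fixed method.

```python
import random
from music21 import note, stream

def random_walk_melody(length=16, seed=0):
    """Generate a short melody by random-walking over a C major scale."""
    random.seed(seed)
    scale_midi = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5, C major
    idx = 0
    melody = stream.Stream()
    for _ in range(length):
        idx = max(0, min(len(scale_midi) - 1, idx + random.choice([-2, -1, 1, 2])))
        n = note.Note()
        n.pitch.midi = scale_midi[idx]
        n.quarterLength = random.choice([0.5, 1.0])
        melody.append(n)
    return melody

if __name__ == "__main__":
    random_walk_melody().write("midi", fp="random_walk.mid")
```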

Improviser

Delivering dynamic, spontaneous performances that captivate through real-time musical expression.

  • Performance Expertise: Skilled in live improvisation on piano, guitar, and modular synthesizers, responding to audience energy or collaborative cues.
  • Contexts: Performed with jazz ensembles, at experimental festivals, and at live-coding events, using SuperCollider and Ableton Push for layered soundscapes.
  • Collaborative Artistry: Improvises with dancers and visual artists, integrating AI-driven visuals for multi-sensory experiences.

Backend Developer

Building scalable, high-performance systems to power innovative applications.

  • Technical Proficiency: Develops RESTful APIs and microservices with Node.js, Python (Flask/Django), and Java (Spring Boot). Manages SQL (PostgreSQL, MySQL) and NoSQL (MongoDB) databases.
  • DevOps Practices: Implements CI/CD with GitHub Actions, Docker, and Kubernetes, optimizing for AWS deployments.
  • Use Cases: Engineered backend systems for real-time audio streaming platforms, achieving low-latency communication via WebSocket.
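
A minimal sketch of the kind of WebSocket endpoint such a system relies on, written with FastAPI (listed under my Python frameworks below). This is an illustrative relay loop, not the production streaming code; the route name and chunk handling are placeholders.

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/audio")                                  # placeholder route
async def audio_relay(websocket: WebSocket):
    """Accept raw audio chunks and echo them back with minimal buffering."""
    await websocket.accept()
    try:
        while True:
            chunk = await websocket.receive_bytes()       # e.g. PCM frames from the client
            await websocket.send_bytes(chunk)             # relay back (or fan out to peers)
    except WebSocketDisconnect:
        pass

# Run with: uvicorn audio_relay:app --reload   (module name is hypothetical)
```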

Digital Signal Processing Specialist

Designing advanced audio processing algorithms for high-fidelity, real-time applications.

  • Core Skills: Develops DSP algorithms (e.g., reverb, pitch shifting) using C++ (JUCE), Python (NumPy, SciPy), and hardware-accelerated platforms like FPGAs via HLS (a minimal comb-filter example follows this list).
  • AI Integration: Enhances DSP with TensorFlow for tasks like noise reduction, trained on custom audio datasets.
  • Applications: Built low-latency audio pipelines for music production, live performances, and VR/AR spatial audio.
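
As a minimal example of the NumPy/SciPy side of this work, the sketch below implements a single feedback comb filter, the basic building block of Schroeder-style reverb. The delay time, feedback, and mix are arbitrary illustrative values.

```python
import numpy as np
from scipy.signal import lfilter

def comb_reverb(x, sr, delay_ms=60.0, feedback=0.5, mix=0.3):
    """One feedback comb filter: y[n] = x[n] + feedback * y[n - d]."""
    d = max(1, int(sr * delay_ms / 1000.0))
    a = np.zeros(d + 1)
    a[0], a[d] = 1.0, -feedback
    wet = lfilter([1.0], a, x)
    return (1.0 - mix) * x + mix * wet

if __name__ == "__main__":
    sr = 44100
    burst = np.random.randn(sr // 10) * np.hanning(sr // 10)  # short noise burst
    y = comb_reverb(burst, sr)
    print(y.shape, float(np.max(np.abs(y))))
```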

Sound & Computer Graphics Expert

Fusing immersive audio with cutting-edge visuals for transformative experiences.

  • Audio-Visual Integration: Combines Three.js and Tone.js for synchronized 3D visualizations and spatial audio in VR and live performances.
  • Creative Coding: Uses p5.js and Processing for generative art, mapping audio parameters to real-time visuals (see the feature-extraction sketch after this list).
  • Applications: Developed audio-reactive installations for museums and festivals, integrating with Unity and Unreal Engine.
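
The audio-to-visual mapping usually starts with a small feature extractor. Below is a language-neutral Python/NumPy sketch that reduces one analysis frame to a few normalized band energies, which a p5.js or Three.js layer could then map to scale, hue, or brightness; the band edges and window choice are arbitrary.

```python
import numpy as np

def band_energies(frame, sr, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Reduce one audio frame to normalized low/mid/high band energies (0..1)."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energies = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(float(np.sqrt(np.mean(spectrum[mask] ** 2))))
    peak = max(energies) or 1.0
    return [e / peak for e in energies]   # e.g. map to scale, hue, brightness

if __name__ == "__main__":
    sr = 48000
    t = np.arange(1024) / sr
    frame = np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone
    print(band_energies(frame, sr))
```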

Technical Skills

I design responsive, real-time systems integrating audio, visuals, and computation, with a focus on modular, open-source workflows.

Creative Coding

  • Audio Tools: Ableton Live, Max/MSP, SuperCollider, Pure Data, FMOD, Wwise, Pydub, TidalCycles
  • Graphics Tools: p5.js, Three.js, Processing, Unity, Unreal Engine, TouchDesigner, Shadertoy

Software Development

  • Languages: C++, Python, Java, Node.js, Ruby, Assembly
  • Frontend Frameworks: React, Vue.js, Angular, Vite, Three.js, p5.js
  • Python Frameworks: Flask, FastAPI, TensorFlow, OpenCV, scikit-learn, Matplotlib, Numba
  • Databases: PostgreSQL, MySQL, MongoDB, Redis
  • DevOps: Docker, Kubernetes, GitHub Actions, AWS, Azure

Embedded Systems

  • Platforms: Raspberry Pi, Arduino, Bela, Alinx, Xilinx FPGAs
  • DSP Code Conversion: Ports SuperCollider and Shadertoy code to C++ HLS, Python MyHDL, and OpenCL

System Administration

  • Operating Systems: macOS, Windows, Arch Linux, Debian, Ubuntu, WSL2
  • Shell Scripting: Bash, PowerShell
  • Virtualization: VirtualBox, Parallels, QEMU

Content Management

  • CMS: WordPress, Drupal
  • Local Setup: Apache, MAMP, LAMP, XAMPP

Current Directions and Research

My work explores the intersection of avant-garde artistry and emerging technologies, inspired by David Sylvian’s Manafon and the experimental legacies of Darmstadt, Cologne, and Paris. I focus on three areas:

WiFi-Based Networked Performance Systems

  • Innovation: Developed WebRTC/WebSocket-based platforms for low-latency (sub-20ms) networked performances, premiered on BBC Radio 3 and Antena 2.
  • Applications: Enabled global ensembles to improvise seamlessly, integrating OSC with DAWs like Ableton Live.
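
A minimal sketch of the OSC side of that integration, using python-osc to send messages toward a DAW. The host, port, and address patterns below are placeholders; they depend entirely on which OSC bridge the DAW exposes (for Ableton Live, typically a Max for Live device or an AbletonOSC-style remote script).

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder endpoint: whatever OSC bridge the DAW is running locally.
client = SimpleUDPClient("127.0.0.1", 9000)

# Address patterns are bridge-specific; these are illustrative only.
client.send_message("/performance/tempo", 118.0)
client.send_message("/performance/clip/fire", [0, 0])   # e.g. track 0, clip 0
```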

AI-Driven Improvisation and Media Synthesis

  • Approach: Trained RNNs and transformer models (TensorFlow, PyTorch) for real-time audio-visual generation, premiered on WDR 3 (a stripped-down model sketch follows this list).
  • Impact: Created installations where audience movements drive AI-generated soundscapes and GLSL visuals.
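
A stripped-down sketch of the sequence-model side of this work, assuming a PyTorch LSTM over a 128-symbol event vocabulary (e.g., quantized MIDI pitches). The actual systems differ in architecture, training data, and the audio-visual decoding stage.

```python
import torch
import torch.nn as nn

class NextEventRNN(nn.Module):
    """Predicts the next symbol in a quantized musical event stream."""
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.head(out), state

if __name__ == "__main__":
    model = NextEventRNN()
    seed = torch.randint(0, 128, (1, 16))              # a 16-event seed phrase
    logits, state = model(seed)
    next_event = torch.distributions.Categorical(logits=logits[:, -1]).sample()
    print(int(next_event))                             # untrained model: random continuation
```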

Interconnected Multimedia Ecosystems

  • Framework: Built modular pipelines with Python (music21), JavaScript (Tone.js), and C++ (JUCE) for audio-visual composition (see the hand-off sketch after this list).
  • Applications: Developed mixed-reality performances with Ambisonic audio and WebXR visuals for Paris festivals.
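
A minimal sketch of the Python end of such a pipeline: music21 parses a score (a corpus chorale here, purely for illustration) and flattens it to a JSON event list that a Tone.js or JUCE stage could consume. The field names are an assumption, not a fixed schema.

```python
import json
from music21 import corpus, note

def score_to_events(score):
    """Flatten a score into plain note events for a downstream renderer."""
    events = []
    for n in score.flatten().notes:
        if isinstance(n, note.Note):                     # skip chords for brevity
            events.append({
                "midi": n.pitch.midi,
                "offset": float(n.offset),               # in quarter notes
                "duration": float(n.quarterLength),
            })
    return events

if __name__ == "__main__":
    chorale = corpus.parse("bwv66.6")                    # bundled with music21
    with open("events.json", "w") as fh:
        json.dump(score_to_events(chorale), fh, indent=2)
```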

Connect and Collaborate

I’m passionate about creating groundbreaking projects at the nexus of music, technology, and art. Let’s collaborate on innovative compositions, software solutions, or immersive installations.

  • Email: selfdeterminedhermit@gmail.com (Responds within 24–48 hours)
  • Donations: tiagomoraimorgado2014@gmail.com (Support ongoing creative projects)
  • Phone: +351 934 446 355 (WhatsApp or calls, WEST UTC+1)
  • Social Platforms:
    • X: @selfdeterminedhermit
    • GitHub: github.com/selfdeterminedhermit
    • SoundCloud: soundcloud.com/selfdeterminedhermit
    • LinkedIn: linkedin.com/in/tiago-morais-morgado

Availability:

  • Open for full-time, part-time, or contract roles in music composition, sound design, software development, or DSP.
  • Seeking collaborations on experimental music, VR/AR/XR projects, or AI-driven art.
  • Available for consultations on networked systems, algorithmic composition, or audio-visual integration.

Let’s weave sound, code, and visuals into transformative experiences. Reach out to discuss ideas or opportunities!

Pinned Repositories

  • TMM88-FRONTEND (JavaScript)
  • TMM88-VIOLA-PIECES
  • TMM88-SHADERTOY (GLSL)
  • TMM88_C4D
  • TMM88-SC3 (SuperCollider)
  • TMM88-DATASCRAPPING (Python)