Isaac Bernat

Senior Software Engineer

Experience

Sabbatical Barcelona, Spain
Aug 2024 - Present

An intentional period of professional development focused on deepening expertise in AI/LLM-driven engineering, system design and modern development workflows. Key activities include:

  • System Design: Authored a comprehensive, production-level technical design for a heuristic-based Spam Classification Engine, demonstrating a pragmatic approach to building new features within legacy systems.
  • AI/LLM Application & Tooling: Developed several open-source projects, including a data pipeline using Python and LLMs for a generative art archive, and maintained the popular netflix-to-srt tool (800+ stars).
  • Continued Learning: Completed specialized courses on AI/LLM applications from DeepLearning.AI and Coursera to stay current with emerging technologies.

A curated selection of GitHub Projects from this period is detailed below.

Preply Barcelona, Spain
Mar 2020 - Jul 2024
Senior Backend Engineer I (P7, Staff-Equivalent)

Jun 2022 – Jul 2024

  • Business Impact: Drove the "Postpone Billing" experiment, the single most profitable initiative of 2023 (out of 215 A/B tests), delivering a +3% global Gross Margin lift. Championed the project for three quarters against initial resistance, turning short-term negative metrics into a massive long-term win. (Read the Case Study)
  • Strategic Foresight: Identified critical risks in a major company initiative ("Subs Direct") and formally advocated for a phased, iterative approach to mitigate over 10k+ hours in potential wasted development. The core of my proposed strategy was eventually adopted after the initial "big bang" approach proved too complex.
  • Cross-Functional Leadership: As the engineering representative for the Subscription Experience team, I held bi-weekly stakeholder meetings with Finance, Data and CRM teams to coordinate on critical projects, ensuring data models and financial reporting remained accurate during high-stakes experiments. I also supported engineering-wide quality by conducting technical interviews for multiple backend roles.
  • Proactive System Ownership: Identified and led the mitigation of critical bugs outside my team's domain, including a flaw costing over $21k USD/week and a Celery task spamming the queue with over 1 million executions per hour.

Backend Engineer III (P6, Senior)

Mar 2020 – Jun 2022

  • Founding the Subscription Model: As the sole backend architect on the MVP, took on a high-risk challenge and led the transition from a package-only model to subscriptions. This foundational work grew to process over 200k monthly renewals and was instrumental in securing a Series C funding round. (Read the Full Case Study)
  • Incident Command & Observability: Commanded and documented 15+ production incidents, including a SEV-2 that went undetected for 16 hours and led to a new, mandatory pre-launch monitoring policy for all high-risk features company-wide. (Read the Incident Retrospective)
  • Team Scaling & Mentorship: After the MVP's success, helped build the dedicated Subscriptions team by interviewing 13 candidates and hiring 2 foundational engineers. Improved team agility by facilitating sprint retrospectives, cutting meeting time by 50% while increasing engagement.
  • Modernization & Performance: Led the backend migration of the high-traffic (1M+ MAU) Q&A application from Django templates to a GraphQL API. This initiative drastically improved performance (e.g. page load time reduced by ~85% and Googlebot crawl rate increased by 8x) and unblocked a series of A/B tests that resulted in a +300% lift in New Paying Customers from the blog and an +800% CVR increase on the Q&A user flow.

Ivbar Institute Stockholm, Sweden
Mar 2018 - Mar 2020
Fullstack Engineer

Jan 2019 – Mar 2020

  • Technical Expertise & Communication: Selected to speak at PyCon Sweden 2019, presenting a deep-dive on code optimization that demonstrated a >10^18x performance improvement on a practical problem.
  • Product Contribution: Stepped into a full-stack role to meet team needs, contributing to both a Python backend and a React-based frontend for visualizing complex medical case-mix decision trees.

Backend Engineer

Mar 2018 – Jan 2019

  • Healthcare Data Processing: Architected a core data processing engine for the Swedish National Quality Registry for Breast Cancer (NKBC), enabling the analysis of treatment outcomes across different hospitals to identify best practices.
  • Data Privacy & Anonymity: Implemented MECE-compliant privacy filters to ensure patient k-anonymity in sensitive medical datasets, a critical, non-negotiable requirement for project viability and ethical data handling.
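
The core of k-anonymity filtering can be sketched in a few lines of Python. This is an illustrative toy, not the Ivbar implementation; the field names are hypothetical. Records whose combination of quasi-identifiers occurs fewer than k times are suppressed, so no patient can be singled out within a group smaller than k.

```python
from collections import Counter

def enforce_k_anonymity(records, quasi_identifiers, k):
    """Suppress any record whose combination of quasi-identifiers is
    shared by fewer than k records in the dataset."""
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    counts = Counter(key(r) for r in records)
    return [r for r in records if counts[key(r)] >= k]

# Hypothetical fields, for illustration only.
records = [
    {"age_band": "40-49", "hospital": "A", "outcome": "remission"},
    {"age_band": "40-49", "hospital": "A", "outcome": "relapse"},
    {"age_band": "70-79", "hospital": "B", "outcome": "remission"},
]
safe = enforce_k_anonymity(records, ["age_band", "hospital"], k=2)
# The unique ("70-79", "B") record is suppressed; the two "A" records remain.
```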

Productos Aditivos Montcada, Spain
Jun 2016 - Jan 2018

Assumed a dual CTO/CFO leadership role in a medium-sized enterprise, driving both technology modernization and financial optimization initiatives.

  • Financial Strategy & Negotiation: Secured significant cost savings by renegotiating multi-year commercial loans, reducing interest rates by 35% on average and saving the company over 64k EUR. Also identified and rectified multi-thousand-euro payroll errors.
  • Technology & Vendor Management: Oversaw a critical ERP migration and managed the company-wide transition to the new Spanish SII VAT tax reporting system, ensuring business continuity and compliance.
  • Data-Driven Business Intelligence: Initiated and led a market intelligence project using customs data to analyze competitor and supplier trends, providing key strategic insights to leadership that were previously unavailable.

Wrapp Stockholm, Sweden
Oct 2012 - Jun 2016

Progressed from an early Backend Engineer to a key Fullstack Tech Lead during a major company pivot and technical transformation from a legacy Python 2.7 (Twisted) monolith to a modern Go/Python microservice architecture. I was also selected to speak at the inaugural PyCon Sweden 2014.

Fullstack Engineer & Tech Lead

Sep 2014 – Jun 2016

  • DevOps & Tooling: Architected and built "Opsweb," a mission-critical internal deployment system using Dart for managing canary releases of our 50+ microservices to AWS, complete with health checks and automated rollbacks.
  • Product Engineering: Contributed to the "Wrappmin" merchant dashboard (React), focusing on the backend APIs for mapping credit card transactions to offers.
  • Microservice Ownership: Created and maintained critical microservices in both Python (users) and Go (offers), coordinating with multiple contributors to ensure stability and performance.
Career Progression & Key Contributions

Oct 2012 – Sep 2014

  • Data Engineering: Enhanced and optimized the company's ETL pipeline for our Amazon Redshift data warehouse. Built real-time business and system KPI dashboards.
  • Frontend Engineering: Owned and expanded the user-facing email service, leveraging an A/B testing framework to improve user engagement and deliver over 1M newsletters weekly.
  • Backend Foundations: As one of the initial engineers, architected and implemented the company's first content recommendation engine from scratch.

FXStreet Barcelona, Spain
Feb 2012 - Jul 2012

Contributed to the development and maintenance of a high-traffic, multi-language financial news portal on Microsoft Azure using a C#/.NET stack.

  • Internationalization (i18n): Handled complex front-end challenges related to localization, ensuring UI/UX consistency across 17 languages, including right-to-left scripts (Arabic) and character sets with different spacing requirements (Russian, Japanese).
  • Platform Maintenance: Maintained and developed features for a global user base of forex traders, working within a Mercurial (Hg) version control system.

Case Studies

A Failed Experiment: Key Lessons from Yearly Subscriptions

A retrospective on a failed experiment to launch yearly subscriptions. This project became a powerful lesson in the importance of upfront product validation and the engineering principle of YAGNI ("You Ain't Gonna Need It"), leading to a more pragmatic and data-driven approach in subsequent work.

Read the full retrospective ↓

The Challenge: The "Obvious" Next Step

Following the immense success of our initial 4-week subscription model, the next logical step seemed to be launching a yearly subscription option. The hypothesis had several layers: we could increase long-term user commitment, create a more predictable annual revenue stream and further reduce churn by offering a compelling discount for a year-long commitment.

While we had qualitative data from user research teams suggesting this was a desired feature, the team, myself included, moved into the implementation phase with a high degree of optimism, without fully scrutinizing the underlying business and logistical assumptions.

The Technical Approach: A Mistake in Foresight

Anticipating that the business would eventually want other billing periods (e.g. quarterly for academic terms or summer programs), I made a critical technical error: I over-engineered the solution.

Instead of building a simple extension to support period=yearly, I designed a highly flexible system capable of handling any arbitrary billing duration. This added significant complexity to the codebase, testing and deployment process. My intention was to be proactive and save future development time, but it was a classic case of premature optimization.

The Outcome: A Failed Experiment and Valuable Lessons

We launched the A/B test, but the results were clear and disappointing: the adoption rate for the yearly plan was negligible. The discount we could realistically offer, after accounting for tutor payouts and our own margins, was simply not compelling enough for users to make a year-long financial commitment.

The feature was quickly shelved. The complex, flexible backend system I had built became dead code. This failure was a powerful learning experience that fundamentally improved my approach as an engineer and technical lead.

Learning 1: Validate the Business Case Before Building

The project's failure could have been predicted and avoided with a more rigorous pre-mortem and discovery phase. We jumped into building without asking the hard questions first.

  • Tutor Viability: The entire premise rested on offering a discount. We never validated if enough tutors were willing to absorb a significant portion of that discount. The handful who agreed to the pilot did so only after heavy negotiation, with the company subsidizing most of the cost—a model that was completely unscalable.
  • Logistical Complexity: We hadn't solved the critical operational questions. What happens if a student's tutor leaves the platform six months into a yearly plan? The processes for refunds, tutor reassignment and the accounting implications were undefined, creating massive downstream risk for our Customer Support and Finance teams.
  • Relying on Overly Optimistic Data: I learned to be more critical of qualitative user research that isn't backed by a solid business case. I took the initial presentations at face value, without questioning the difficult financial and operational realities.

This taught me to insist on a clear, data-backed validation of the entire value chain—not just user desire—before committing engineering resources.

Learning 2: The True Meaning of YAGNI

My attempt to build a "future-proof" system was a direct violation of the "You Ain't Gonna Need It" principle. The extra effort and complexity I added not only went unused but also made the initial build slower and riskier. This experience gave me a deep, practical appreciation for building the absolute simplest thing that can test a hypothesis. It's not about being lazy; it's about being efficient and focusing all engineering effort on delivering immediate, measurable value.

This project, more than any success, shaped my pragmatic engineering philosophy and my focus on rigorous, upfront validation.

CI/CD Optimization: Driving $31k in Annual Savings with a 1-Day Fix

Identified a key inefficiency in our CI/CD pipeline and led a data-driven initiative to fix it. This simple change, implemented in one day, resulted in an 80% reduction in unnecessary test runs, saving over $31,000 USD annually in infrastructure costs and hundreds of hours in developer wait time.

Read the full retrospective ↓

The Challenge: An Inefficient and Expensive CI Pipeline

In a May 2022 engineering all-hands meeting, a presentation on our infrastructure costs revealed a surprising fact: 18% of our entire AWS EC2 spend was dedicated to CI/CD. This sparked an idea. Our process ran a comprehensive, 15-minute unit test suite on every single commit, including those in Draft Pull Requests.

This created two clear problems:

  1. Financial Waste: We were spending thousands of dollars every month running tests on code that developers knew was not yet ready for review.
  2. Developer Friction: The Jenkins queue was frequently congested with these unnecessary test runs, increasing wait times for developers who actually needed to merge critical changes.

My Role: From Idea to Impact

As the originator of the idea, my role was to validate the problem, build consensus for a solution and coordinate its rapid implementation.

The Solution: A Data-Driven, Consensus-First Approach

My hypothesis was that developers rarely need the full CI suite on draft PRs, as they typically run a faster, local subset of tests. A simple change to make the CI run on-demand would have a huge impact with minimal disruption.

My approach was fast and transparent:

  1. The Proposal: I framed the solution in a simple poll in our main developer Slack channel. The message was clear: "POLL: wdyt about only running tests on demand for Draft PRs? ... This could help reduce [our AWS costs]. ... We could type /test instead." I also credited the engineer who had already prototyped an implementation, building on existing team momentum.
  2. Building Consensus: The response was immediate and overwhelmingly positive. Within a day, the poll stood at 20 in favor and only 2 against. With this clear mandate, we moved forward.
  3. Rapid, Collaborative Implementation: I coordinated with the engineer from the Infrastructure team who had built the prototype. We ensured the new workflow was non-disruptive: developers could still get a full test run anytime by typing the /test command. We had the change fully implemented and ready for review the same day.
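
The decision rule was trivial by design. The real gate lived in our Jenkins configuration; this hypothetical Python predicate just illustrates the logic:

```python
def should_run_full_ci(is_draft, comments):
    """Ready PRs always get the full test suite; draft PRs only when a
    developer explicitly requests it with the /test command."""
    if not is_draft:
        return True
    return any(c.strip().startswith("/test") for c in comments)

assert should_run_full_ci(False, [])               # regular PR: always test
assert not should_run_full_ci(True, [])            # draft: skip by default
assert should_run_full_ci(True, ["wip", "/test"])  # draft + /test: run
```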

The Results: Immediate and Measurable Savings

The impact of this simple change, validated by data after three months of operation, was significant:

  • Drastic Reduction in Waste: We saw an 80% reduction in test runs on draft PRs (3,990 PRs without a command vs. 1,060 with one).
  • Verified Financial Savings: With a calculated cost of $1.95 per test run, this translated to immediate savings of over $2,600 USD per month, or an annualized saving of over $31,000 USD.
  • Improved Developer Productivity: The Jenkins queue became significantly less congested, saving hundreds of collective engineering hours per month that were previously lost to waiting. This directly translated to faster feedback loops and a more agile development cycle.
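
These figures are easy to sanity-check using only the numbers quoted above; the estimate lands within rounding of the reported monthly and annual savings:

```python
# Figures from the three-month measurement window described above.
cost_per_run = 1.95      # USD per full CI run
avoided_runs = 3_990     # draft PRs that never triggered /test
months = 3

monthly_savings = avoided_runs / months * cost_per_run  # ~ $2.6k/month
annual_savings = monthly_savings * 12                   # ~ $31k/year

assert round(monthly_savings) == 2594
assert round(annual_savings) == 31122
```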

This project was a powerful demonstration of how a single, data-backed idea, when socialized effectively, can be implemented rapidly to deliver a massive, measurable return on investment by removing friction and eliminating waste.

Incident Command: Turning a Personal Mistake into Systemic Improvements

A retrospective on a SEV-2 incident where I took full ownership of a pre-launch misconfiguration for a critical MVP. This case study details the methodical response process and the key systemic improvements that resulted, turning a personal mistake into a valuable lesson for the entire engineering organization on the non-negotiable importance of observability.

Read the full retrospective ↓

The Challenge: A Pre-Launch SEV-2 Incident (Feb 2021)

Days before the planned launch of the company-defining Subscriptions MVP, we needed to conduct final Quality Assurance on our CRM email flows. This testing had to be done in the production environment to validate the integration with our email provider.

Due to a critical misunderstanding of our internal A/B testing framework's UI, I incorrectly configured the experiment. I believed I was targeting a small whitelist of test users, but I had inadvertently set the experiment live for 100% of eligible users.

The immediate impact was that thousands of customers were exposed to an incomplete, unlaunched feature. The partial experience consisted of incorrect copy promising auto-renewal and a different set of purchase plan sizes.

My Role: Incident Owner and Scribe

As the owner of the feature and the person who made the mistake, I took immediate and full responsibility. My role during the incident was twofold:

  • As the Incident Owner, I was responsible for coordinating the response, assessing the impact and driving the technical resolution.
  • As the designated Scribe, I was responsible for maintaining a clear, timestamped log of all actions and communications, ensuring we would have a precise record for the post-mortem.

The Response: A Methodical Approach Under Pressure

The incident went undetected for 16 hours overnight simply because we had no specific monitoring in place for this new flow. Once it was flagged the next morning, my response followed a clear hierarchy:

  1. Immediate Mitigation: The first action was to stop the user impact. I immediately disabled the experiment in our admin tool, which instantly reverted the experience to normal for all users and stopped the "bleeding."

  2. Diagnosing the Blast Radius: With the immediate crisis averted, I began the diagnosis myself. I queried our database's SubscriptionExperiment table and quickly identified that ~250 users had been incorrectly enrolled, far more than the handful of test accounts we expected.

  3. Resolution and Cleanup: I wrote and deployed a data migration script to correct the state for all affected accounts. This ensured that no user would be incorrectly billed or enrolled in a subscription and that our A/B test data for the upcoming launch would be clean.

The incident was fully resolved in under an hour from the time it was formally declared.

The Outcome: Systemic Improvement from a Personal Mistake

While we successfully corrected the immediate issue, the true value of this incident came from the blameless post-mortem process that I led. The 16-hour detection delay became the central exhibit for a crucial change in our engineering culture.

The post-mortem produced several critical, long-lasting improvements:

  • Improved Tooling: We filed and prioritized tickets to add clearer copy, UX warnings and a "confirmation" step to our internal experimentation framework to prevent this specific type of misconfiguration from ever happening again.
  • A New Engineering Rule: We established a new, mandatory process: any high-risk feature being tested in the production environment must have a dedicated monitoring dashboard built and active before the test begins.
  • A Foundational Personal Learning: I had personally made the trade-off to deprioritize the monitoring and observability tickets for the MVP to meet a tight deadline. This incident was a powerful, firsthand lesson that observability is not a "nice-to-have" feature; it is a core, non-negotiable requirement for any critical system. This principle has fundamentally shaped how I approach every project I've led since.

This incident, born from a personal mistake, became a catalyst for improving our tools, our processes and my own engineering philosophy.

Proactive Ownership: A UX Fix for a +27% GMV Lift

Identified a simple user experience mismatch on our mobile homepage, proactively proposed a low-effort A/B test to a different team and drove a +27% increase in Gross Merchandise Value (GMV) from the affected user segment. This case study is a testament to the power of looking beyond assigned tasks and thinking like an owner of the entire product.

Read the full retrospective ↓

The Challenge: A Simple Observation, A Big Opportunity

While working on a backend task, I was reviewing our platform's user flow and made a simple observation. Our homepage used the same 'hero image' for all users: a person on a laptop. While this was perfectly appropriate for desktop visitors, it struck me as a subtle but significant disconnect for a user visiting our site on their phone. My hypothesis was that this 'one-size-fits-all' approach was creating a subconscious barrier, making the product feel less relevant to mobile users and potentially harming conversion.

My Role: Proactive Contributor

This was a clear example of an opportunity that fell outside my direct responsibilities and my team's domain. My role was not to implement a fix, but to act as a proactive owner of the overall product experience. This meant validating my observation with data and building a compelling, low-friction proposal for the team that actually owned the homepage.

The Solution: A Data-Backed, Easy-to-Say-Yes-To Proposal

I knew that simply flagging the issue in a Slack channel would likely result in it being lost in the backlog. To drive action, I followed a three-step process:

  1. Validate with Data: I partnered with a data analyst to confirm my hunch. We reviewed engagement metrics and confirmed that our mobile user segment indeed had a lower conversion rate compared to desktop users.
  2. Formulate a Hypothesis: I framed my observation as a clear, testable hypothesis: "Showing a mobile-centric image to mobile users will create better resonance, leading to increased engagement and conversion."
  3. Create a Low-Effort Proposal: I wrote a brief, one-page document for the relevant Product Manager. I didn't ask them to commit to a major roadmap change; I simply proposed a low-effort, high-potential A/B test. By doing the initial data validation and presenting a clear hypothesis, I made it as easy as possible for them to say "yes" and add it to their next sprint.

The Results: Outsized Impact from a Small Change

The homepage team was receptive and ran the A/B test. The results were immediate and exceeded all expectations. The mobile user cohort that saw a new, mobile-centric hero image demonstrated a massive improvement in key metrics:

  • +27% increase in Gross Merchandise Value (GMV)
  • +20% increase in total hours purchased

This initiative was a powerful lesson in the value of thinking like an owner. It proved that sometimes the most significant product improvements don't come from complex, multi-month engineering epics, but from a simple, user-centric observation and the initiative to see it through. It reinforced my belief that every member of a team has the ability to drive impact if they are empowered to look beyond their next ticket.

Strategic Alignment: A Lesson in "Disagree & Commit"

Identified a growing, underserved user segment (Music learners) experiencing significant friction due to product limitations. This case study details a data-backed advocacy effort to improve their experience, ultimately serving as a lesson in aligning individual insights with broader company strategy and the principle of "disagree and commit."

Read the full retrospective ↓

The Challenge: Overlooking Valuable Niche User Segments

Our platform's primary focus was language learning. However, I observed a significant and growing user base for other subjects, particularly Music. While these were not core growth areas, they consistently ranked in our top 10 most popular subjects by hours purchased, representing a substantial, profitable revenue stream.

My concern was that these users faced unnecessary friction due to product limitations. For example:

  • Lack of Specialization Filters: Unlike languages (e.g. "Business English"), Music users couldn't search for "Guitar Tutor" or "Piano Tutor", forcing them to manually scroll through long lists and click into profiles.
  • Inflexible Subscription Frequencies: Many adult music learners preferred bi-weekly lessons due to practice time, but our subscription plans only offered weekly frequencies, pushing them towards less convenient options or even churn.

My hypothesis was that these overlooked user experience gaps were leading to unnecessary churn and hindering growth within these valuable niche segments.

My Role: Data-Driven Advocate

This initiative fell outside my direct team's roadmap. My role was to act as a data-driven advocate for these users, identifying the problem, quantifying the opportunity and proposing simple solutions to improve their experience.

The Solution: Building a Case for Niche Improvements

I approached this by building a clear, data-backed case:

  1. Quantify the Opportunity: I gathered data on the total hours and GMV generated by the Music subject, demonstrating its substantial contribution to the company's bottom line, despite not being a primary growth focus.
  2. Highlight User Friction: I presented specific examples of user experience issues, backed by anecdotal feedback from customer support, illustrating how our generic platform was failing these specific users.
  3. Propose Low-Effort Solutions: I outlined simple backend changes (e.g. adding instrument specializations, allowing bi-weekly frequency options for specific subjects) that I believed could deliver high ROI by reducing friction and improving retention in these segments.

I presented this proposal to product leadership, emphasizing the potential for "easy wins" by serving an existing, valuable user base better.

The Outcome: Strategic Alignment and "Disagree and Commit"

While the product leadership acknowledged the validity of my insights and the value of these user segments, they made a strategic decision to maintain laser-focus on core language growth. Resources were explicitly allocated away from non-core subjects to maximize impact in the primary business area.

The proposed features were not prioritized in the roadmap.

This project, while not resulting in a shipped feature, provided me with a crucial lesson in "disagree and commit." I learned that:

  • Advocacy is Essential, but Strategy is King: It's vital to advocate fiercely for what you believe is right, especially when backed by data.
  • Respect Broader Strategic Alignment: Once a strategic decision is made, even if you disagree with it, it's essential to understand the rationale and align your efforts with the broader company goals.
  • Professional Conduct: I documented my research and proposal in our internal knowledge base, ensuring the insights were preserved for future consideration. I then refocused my full energy on the prioritized roadmap items.

This experience reinforced the importance of strategic clarity and demonstrated my ability to contribute insights, influence discussions and ultimately commit professionally to the agreed-upon direction.

System Design

Reimbursement API (2017)

Design and implementation of a healthcare reimbursement system.

  • Architecture: Demonstrates a clean microservice architecture using containerized Python and PostgreSQL services orchestrated with Docker. The design follows REST principles and isolates concerns for scalability.
  • Strategic Decisions: The accompanying documentation outlines clear assumptions, API design with versioning considerations, error handling strategies and a thoughtful discussion on performance trade-offs, showcasing a mature approach to software development.

View on GitHub →

URL Shortener (2012)

A high-performance URL shortener, implemented in under 100 lines of Python for Google App Engine.

  • Architecture: The solution uses base64-encoded database IDs to generate minimal-length URLs and employs two caching strategies to achieve near-zero database reads for repeat lookups.
  • Scalability: The design document includes a detailed analysis of long-term scaling, covering CDN usage, database sharding, consistent hashing and other production-grade considerations, demonstrating foresight beyond the initial implementation.
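
The core encoding trick, mapping an auto-increment database ID to the shortest possible slug, can be sketched as follows. This is an illustrative reconstruction; the repository's exact alphabet and ordering may differ.

```python
import string

# 64 URL-safe characters, so slugs stay as short as possible.
ALPHABET = string.ascii_letters + string.digits + "-_"
BASE = len(ALPHABET)  # 64

def encode(n):
    """Map an auto-increment database ID to a minimal-length slug."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, BASE)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def decode(slug):
    """Invert encode(), so a lookup needs only an indexed primary key."""
    n = 0
    for ch in slug:
        n = n * BASE + ALPHABET.index(ch)
    return n

assert decode(encode(123_456_789)) == 123_456_789
assert len(encode(64**3 - 1)) == 3  # 262,143 IDs fit in 3 characters
```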

View on GitHub →

Personal & Open Source Projects

Basepaint Media Pipeline

An archival project for the generative art community basepaint.xyz. This project features a pipeline of Python scripts to fetch, process and enrich artwork metadata using LLMs. The final output is a series of high-quality, printable PDFs containing the art, creation stats and AI-generated descriptions of each piece, making the collection accessible and well-documented.

View on GitHub →

Rhythm Radar (2016)

An experimental real-time rhythm visualizer built with D3.js. This proof-of-concept uses polar coordinates to create an intuitive "radar" display for drum patterns, offering a novel alternative to traditional linear notation. The project was inspired by a desire to understand complex rhythms in real time, especially when multiple instruments and loops are involved.

View on GitHub →

Docker Remote Debugging

A practical guide and working example for debugging Python code remotely within a Docker container. This project provides a clear, step-by-step solution using pudb and telnet, complete with sample code and configuration, to improve development workflows in containerized environments.

View on GitHub →

Moon Cycle Data Analysis

A data-centric project providing daily moon phase data from 1800 to 2050 in a clean CSV format. The repository includes two Python scripts: one using the ephem library for high-accuracy calculations and a dependency-free version for faster, less precise estimates. This demonstrates an understanding of both scientific accuracy and practical performance trade-offs.
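
The dependency-free approach amounts to a mean-cycle approximation. A minimal sketch, assuming a fixed reference new moon (the actual script may use a different epoch or apply corrections):

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588853  # mean length of a lunar cycle, in days
# A documented reference new moon: 2000-01-06 18:14 UTC.
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

def moon_phase_fraction(when):
    """0.0 = new moon, ~0.5 = full moon, rising back towards 1.0.
    Mean-cycle estimate; ephem-based results can differ by hours."""
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86400
    return (days % SYNODIC_MONTH) / SYNODIC_MONTH

# The reference instant itself is a new moon by construction.
assert moon_phase_fraction(REFERENCE_NEW_MOON) == 0.0
# The full moon of 2000-01-21 lands near the middle of the cycle.
full = moon_phase_fraction(datetime(2000, 1, 21, 4, 41, tzinfo=timezone.utc))
assert 0.45 < full < 0.55
```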

View on GitHub →

Tinymem

A Simon-inspired memory game for the Thumby, a keychain-sized programmable console.

  • Technology: Written in under 50 lines of MicroPython, this project demonstrates resource-constrained development, making use of the device's audio, sprites and input buttons.
  • Purpose: Served as a proof of concept for a complete, playable game with a tiny code footprint. It was the subject of a presentation at PyDayBCN 2023.

View on GitHub →

Pycrastinate (2014)

A language-agnostic tool for managing TODO and FIXME comments across entire codebases, presented in full at PyCon Sweden 2014 and abridged at EuroPython 2014.

  • Architecture: Built as a configurable pipeline, the tool finds tagged comments, enriches them with git blame metadata (author, date) and generates reports.
  • Purpose: This early project was an exploration into building developer tooling and demonstrates an early interest in code quality and maintainability.

View on GitHub →

Education & Interests

M.Sc. from NTNU/UPC, fluent in English and active in community leadership.


M.Sc. in Informatics Engineering, Highest Honours

Norwegian University of Science and Technology (NTNU) & Universitat Politècnica de Catalunya (UPC), 2011

  • Completed a 5-year, 300+ ECTS program in Informatics Engineering at UPC, with specializations in Data Management and Software Engineering.
  • Authored a Master's Thesis on Test-Driven Conceptual Modelling at NTNU, which received the highest possible grade (A with Highest Honours) from both institutions.

Languages

  • Native: Catalan, Spanish
  • Full Proficiency: English (10+ years professional experience; Cambridge CPE/C2)
  • Conversational: Swedish (6 years residency; A2 certified)
  • Basic: Norwegian (EILC Bokmål course)

Leadership & Interests

  • Community Leadership (VP & Treasurer, 2006-2010): Co-managed a non-profit cultural association aimed at providing positive leisure alternatives for local youth. Oversaw budgets, secured public funding and organized community events, including annual LAN parties and boardgame weekends.
  • Personal Interests: I enjoy classical music, performing on piano and harpsichord, and participating in organ masterclasses. I'm also an avid reader and fond of indie games, having developed PoCs for the Thumby, such as a Simon clone and a Pong clone.

Credentials

Degrees, diplomas and language certificates are available for review on GitHub.