Isaac Bernat

Staff Software Engineer

Experience

Independent Professional Development Barcelona, Spain
Aug 2024 - current

An intentional period of independent engineering, focused on shipping production-grade projects, modernizing backend workflows, and integrating privacy-first AI tooling. Key activities include:

  • AI Data Pipelines: Architected the basepaint archive using a production-grade custom Python orchestrator to manage resilient LLM ingestion. Features robust concurrency control with rate limiting and exponential backoff (tenacity, asyncio) and strict data validation (pydantic) to handle stochastic AI outputs.

  • Agentic CI/CD Engineering: Modernized the popular netflix-to-srt open-source tool (850+ GitHub Stars) via AI orchestration. This involved hosting distilled LLMs (Qwen3.5) locally in Orbstack containers to ensure a zero-data-leakage environment for generating multi-language unit tests (Python unittest & node:test) while keeping strict zero-dependency constraints.

  • Hardware-Constrained Polyglot: Engineered a unique 50 FPS real-time asymmetric multiplayer game in Lua for the Playdate console. Designed a custom particle system with object pooling (300+ independently moving particles with zero Garbage Collector churn). Officially selected for publication in the curated Playdate console Catalog.

  • System Design: Authored a comprehensive, production-level technical design for a heuristic-based Spam Classification Engine, a pragmatic approach to building resilient features within legacy ecosystems.
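The resilient-ingestion pattern from the first bullet can be sketched in stdlib-only form (the real pipeline uses tenacity and pydantic; the names, fields and orchestration function here are illustrative, not the project's actual code):

```python
import asyncio
import random

def validate(payload: dict) -> dict:
    """Stand-in for a pydantic model: reject malformed, stochastic output."""
    if not isinstance(payload.get("title"), str) or not payload["title"]:
        raise ValueError("missing or empty 'title'")
    return payload

async def ingest_all(calls, concurrency=4, max_retries=5, base=1.0):
    """Run LLM calls with a concurrency cap, exponential backoff with
    jitter on transient errors, and strict validation of each result."""
    sem = asyncio.Semaphore(concurrency)   # rate limit: N calls in flight

    async def one(call):
        async with sem:
            for attempt in range(max_retries):
                try:
                    return validate(await call())
                except (TimeoutError, ConnectionError):
                    # back off base*1s, base*2s, base*4s, ... plus jitter
                    await asyncio.sleep(base * (2 ** attempt + random.random()))
            raise RuntimeError("exhausted retries")

    return await asyncio.gather(*(one(c) for c in calls))
```

Validation errors deliberately propagate instead of being retried: a transient network failure is worth another attempt, a malformed model response is not.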

A curated selection of GitHub Projects from this period is detailed below.

Preply Barcelona, Spain
Mar 2020 - Jul 2024

Staff Backend Engineer

Jun 2022 – Jul 2024

As a Staff Engineer, I operated beyond my immediate team to shape the technical roadmap and lead complex, cross-functional initiatives. My role was to tackle the most ambiguous business problems, translate them into robust backend architectures, and drive them to completion.

  • Architectural Leadership & Monetization: Led the company's #1 most impactful A/B test of 2023, driving a +3% global Gross Margin lift. Redesigned the core subscription monetization logic, engineering an idempotent, cron-driven state machine with strict database-level locking to guarantee zero double-billing during asynchronous payment gateway failures. (Read the Full Case Study)

  • Proactive System Observability: Consistently identified and mitigated critical production bottlenecks outside my team's direct domain. Engineered a pragmatic rate-limiting circuit breaker to neutralize an automated transaction-fee arbitrage exploit, and traced/mitigated a runaway Celery task executing >1M times/hour.

  • Engineering Velocity & Cost Optimization: Championed an engineering-wide CI/CD optimization initiative. Identified a severe Jenkins queue bottleneck and drove an on-demand testing workflow for Draft PRs, reducing automated-test AWS EC2 infrastructure costs by 80% ($31k/year) and saving hundreds of collective engineering hours per month in the process. (Read the Full Case Study)

  • Strategic Foresight & Technical Translation: Acted as the technical DRI for cross-functional initiatives. Advocated against a high-risk monolithic launch strategy ("Subs Direct"), ultimately guiding a cross-functional pivot to an iterative release model, saving an estimated 6 months of engineering effort.

Senior Backend Engineer

Mar 2020 – Jun 2022

  • Founding the Subscription Platform: Served as the founding backend architect for the Subscription MVP. Designed the core schema and orchestrated cross-team integrations (Payments, CRM, Finance) that scaled to 200k+ monthly renewals. This foundation increased LTV-to-CAC from 1.4x to 3.0x and enabled a Series C funding round. (Read the Full Case Study)

  • Incident Leadership & Observability: Commanded and documented 15+ production incidents. Leveraged a critical SEV-2 post-mortem to champion a mandatory, company-wide pre-launch observability policy, shifting the engineering culture towards monitoring-first deployments. (Read the Incident Retrospective)

  • Team Scaling & Process Improvement: Played a key role in scaling the dedicated Subscriptions team by interviewing 13 candidates and hiring 2 of its foundational engineers. I also redesigned our sprint retrospective process, which cut meeting time by 50% while increasing actionable outcomes and team engagement.

  • Modernization & Database Performance: Led the backend migration of the high-traffic (1M+ MAU) Q&A application from legacy Django templates to a GraphQL API. Resolved database query bottlenecks to reduce page load time by >85%, unblocking experiments that drove a +300% lift in New Paying Customers.

Ivbar Institute Stockholm, Sweden
Mar 2018 - Mar 2020

Fullstack Engineer

Jan 2019 – Mar 2020

Stepped into a fullstack role to meet team needs, contributing to both a Python backend and a React-based frontend for visualizing complex healthcare data.

  • Technical Expertise & Communication: Selected to speak at PyCon Sweden 2019. Presented a deep-dive on bytecode-level and algorithmic optimization, demonstrating how to reduce O(n^3) bottlenecks to O(1) through memoization, search-space reduction, CPython profilers and other techniques.

  • Data Visualization: Engineered a React-based UI to display complex medical case-mix decision trees, empowering healthcare professionals to better understand and analyze treatment pathways.
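The optimization ideas from the PyCon talk can be illustrated with a toy example (not the talk's actual code): memoization turns a cubic computation into an O(1) cache lookup on every repeated call.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def triple_sum_count(n):
    """Count (i, j, k) triples below n that sum to n.
    Naively O(n^3); after the first call, repeats are O(1) cache hits."""
    return sum(
        1
        for i in range(n) for j in range(n) for k in range(n)
        if i + j + k == n
    )
```

Pairing this with search-space reduction (iterating only i and j and deriving k = n - i - j) would also shrink the first call from O(n^3) to O(n^2).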

Backend Engineer

Mar 2018 – Jan 2019

Developed privacy-first data processing systems focused on data integrity and the secure handling of sensitive patient information.

  • Healthcare Analytics: Architected the core data processing engine for the Swedish National Quality Registry for Breast Cancer (NKBC), a system enabling nationwide analysis of treatment outcomes across different hospitals to identify and share best practices.

  • Data Privacy & Security: Implemented MECE-compliant privacy filters to ensure patient k-anonymity in highly sensitive medical datasets. This feature was designed with the explicit constraint of operating within secure, on-premises hospital environments, a critical requirement for meeting strict data governance and ethical standards.
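The k-anonymity filter in the second bullet follows a standard suppression pattern; a simplified, hypothetical sketch (not Ivbar's implementation):

```python
from collections import Counter

def k_anonymize(rows, quasi_ids, k):
    """Suppress any row whose quasi-identifier combination is shared by
    fewer than k records, so no individual can be singled out by it."""
    key = lambda row: tuple(row[q] for q in quasi_ids)
    counts = Counter(key(r) for r in rows)
    return [r for r in rows if counts[key(r)] >= k]
```
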

Productos Aditivos Montcada, Spain
Jun 2016 - Jan 2018

Technical Operations Lead

Led the technical modernization of core enterprise operations, driving data intelligence and managing critical infrastructure vendor relationships.

  • Data-Driven Business Intelligence: Engineered a custom market intelligence data pipeline, aggregating and analyzing international customs data to deliver strategic supply-chain forecasting on competitor/supplier volumes and pricing during global macroeconomic events (e.g., the months-long factory shutdowns during China's "Central Environmental Protection Inspection (CEPI)").

  • Technology & Vendor Management: Managed the technical vendor relationship for a highly constrained legacy ERP system. Oversaw the mandatory company-wide transition to the Spanish SII real-time tax reporting API system, ensuring business continuity and regulatory compliance.

Wrapp Stockholm, Sweden
Oct 2012 - Jun 2016

Progressed from an early, versatile engineer to a key Tech Lead during the company's critical "Wrapp 2.0" pivot, transforming the architecture from a legacy monolith to modern microservices. I was also selected to speak at the inaugural PyCon Sweden.

Tech Lead

Sep 2014 – Jun 2016

  • Microservice Architecture: Helped guide the pivot from a Python 2.7/Twisted monolith to a distributed system of 50+ microservices in Go and Python 3.

  • Core Service Ownership: Maintained critical backend services: users (Python) and offers (Go), owning their architectural consistency and code quality. For the core offers service, this meant overseeing 1,200+ commits from over 12 contributors.

  • Fintech Backend: Built the core backend APIs for mapping credit card transactions to merchant offers, and contributed to the "Wrappmin" merchant dashboard.

  • Developer Tooling: Designed and built "Opsweb", a mission-critical internal deployment system for managing canary releases. It directly improved developer velocity and site reliability with features like automated health checks and rollbacks.

Early Engineer (Backend, Data, Frontend)

Oct 2012 – Sep 2014

  • Fraud Prevention: Created a heuristic-based fraud prevention system to detect and block abuse of the platform’s gift card and coupon features, protecting the company and its customers from financial loss.

  • Data Engineering & Visualization: Architected and optimized the high-volume ETL pipeline for our Redshift data warehouse and built real-time KPI dashboards for business and system monitoring.

  • Complex Frontend Engineering: Evolved a comprehensive A/B testing email framework that delivered over 1M personalized newsletters weekly. This involved complex rendering optimizations and extensive cross-client compatibility testing to ensure a consistent user experience.

  • Remote Collaboration: Contributed effectively in an asynchronous, multi-time-zone environment with engineering teams in both Stockholm and San Francisco.

FXStreet Barcelona, Spain
Feb 2012 - Jul 2012

Contributed to a high-traffic, multi-language financial news portal on a Microsoft Azure and C#/.NET stack, serving a global user base of forex traders.

  • Internationalization (i18n): Handled complex front-end challenges related to localization, ensuring UI/UX consistency across 17 languages, including right-to-left scripts (Arabic) and character sets with different spacing requirements (Russian, Japanese).

  • Platform Development: Maintained and developed features on the core C#/.NET platform, working within a Mercurial (Hg) version control system.

Case Studies

A Failed Experiment: Key Lessons from Yearly Subscriptions

A retrospective on a failed experiment to launch yearly subscriptions. This project became a powerful lesson in the importance of upfront product validation and the engineering principle of YAGNI ("You Ain't Gonna Need It"), leading to a more pragmatic and data-driven approach in subsequent work.

Read the full retrospective ↓

The Challenge: The "Obvious" Next Step

Following the immense success of our initial 4-week subscription model, the next logical step seemed to be launching a yearly subscription option. The hypothesis had several layers: we could increase long-term user commitment, create a more predictable annual revenue stream and further reduce churn by offering a compelling discount for a year-long commitment.

While we had qualitative data from user research teams suggesting this was a desired feature, the team, myself included, moved into the implementation phase with a high degree of optimism, without fully scrutinizing the underlying business and logistical assumptions.

The Technical Approach: A Mistake in Foresight

Anticipating that the business would eventually want other billing periods (e.g. quarterly for academic terms or summer programs), I made a critical technical error: I over-engineered the solution.

Instead of building a simple extension to support period=yearly, I designed a highly flexible system capable of handling any arbitrary billing duration. This added significant complexity to the codebase, testing and deployment process. My intention was to be proactive and save future development time, but it was a classic case of premature optimization.

The Outcome: A Failed Experiment and Valuable Lessons

We launched the A/B test, but the results were clear and disappointing: the adoption rate for the yearly plan was negligible. The discount we could realistically offer, after accounting for tutor payouts and our own margins, was simply not compelling enough for users to make a year-long financial commitment.

The feature was quickly shelved. The complex, flexible backend system I had built became dead code. This failure was a powerful learning experience that fundamentally improved my approach as an engineer and technical lead.

Learning 1: Validate the Business Case Before Building

The project's failure could have been predicted and avoided with a more rigorous pre-mortem and discovery phase. We jumped into building without asking the hard questions first.

  • Tutor Viability: The entire premise rested on offering a discount. We never validated if enough tutors were willing to absorb a significant portion of that discount. The handful who agreed to the pilot did so only after heavy negotiation, with the company subsidizing most of the cost, a model that was completely unscalable.
  • Logistical Complexity: We hadn't solved the critical operational questions. What happens if a student's tutor leaves the platform six months into a yearly plan? The processes for refunds, tutor reassignment and the accounting implications were undefined, creating massive downstream risk for our Customer Support and Finance teams.
  • Relying on Overly Optimistic Data: I learned to be more critical of qualitative user research that isn't backed by a solid business case. I took the initial presentations at face value, without questioning the difficult financial and operational realities.

This taught me to insist on a clear, data-backed validation of the entire value chain, not just user desire, before committing engineering resources.

Learning 2: The True Meaning of YAGNI

My attempt to build a "future-proof" system was a direct violation of the "You Ain't Gonna Need It" principle. The extra effort and complexity I added not only went unused but also made the initial build slower and riskier. This experience gave me a deep, practical appreciation for building the absolute simplest thing that can test a hypothesis. It's not about being lazy; it's about being efficient and focusing all engineering effort on delivering immediate, measurable value.

This project, more than any success, shaped my pragmatic engineering philosophy and my focus on rigorous, upfront validation.

CI/CD Optimization: Driving $31k in Annual Savings with a 1-Day Fix

Identified a key inefficiency in our CI/CD pipeline and led a data-driven initiative to fix it. This simple change, implemented in one day, resulted in an 80% reduction in unnecessary test runs, saving over $31,000 USD annually in infrastructure costs and hundreds of hours in developer wait time.

Read the full retrospective ↓

The Challenge: An Inefficient and Expensive CI Pipeline

In a May 2022 engineering all-hands meeting, a presentation on our infrastructure costs revealed a surprising fact: 18% of our entire AWS EC2 spend was dedicated to CI/CD. This sparked an idea. Our process ran a comprehensive, 15-minute unit test suite on every single commit, including those in Draft Pull Requests.

This created two clear problems:

  1. Financial Waste: We were spending thousands of dollars every month running tests on code that developers knew was not yet ready for review.
  2. Developer Friction: The Jenkins queue was frequently congested with these unnecessary test runs, increasing wait times for developers who actually needed to merge critical changes.

My Role: From Idea to Impact

As the originator of the idea, my role was to validate the problem, build consensus for a solution and coordinate its rapid implementation.

The Solution: A Data-Driven, Consensus-First Approach

My hypothesis was that developers rarely need the full CI suite on draft PRs, as they typically run a faster, local subset of tests. A simple change to make the CI run on-demand would have a huge impact with minimal disruption.

My approach was fast and transparent:

  1. The Proposal: I framed the solution in a simple poll in our main developer Slack channel. The message was clear: "POLL: wdyt about only running tests on demand for Draft PRs? ... This could help reduce [our AWS costs]. ... We could type /test instead." I also credited the engineer who had already prototyped an implementation, building on existing team momentum.
  2. Building Consensus: The response was immediate and overwhelmingly positive. Within a day, the poll stood at 20 in favor and only 2 against. With this clear mandate, we moved forward.
  3. Rapid, Collaborative Implementation: I coordinated with the engineer from the Infrastructure team who had built the prototype. We ensured the new workflow was non-disruptive: developers could still get a full test run anytime by typing the /test command. We had the change fully implemented and ready for review the same day.

The Results: Immediate and Measurable Savings

The impact of this simple change, validated by data after three months of operation, was significant:

  • Drastic Reduction in Waste: Test runs on draft PRs fell by 80%: 3,990 draft PRs never triggered the suite, while only 1,060 requested it via the /test command.
  • Verified Financial Savings: With a calculated cost of $1.95 per test run, this translated to immediate savings of over $2,600 USD per month, or an annualized saving of over $31,000 USD.
  • Improved Developer Productivity: The Jenkins queue became significantly less congested, saving hundreds of collective engineering hours per month that were previously lost to waiting. This directly translated to faster feedback loops and a more agile development cycle.
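The reported figures hang together; a quick back-of-the-envelope check (assuming each of the 3,990 skipped draft PRs avoided exactly one test run across the three-month measurement window):

```python
skipped, ran = 3_990, 1_060        # draft PRs without vs. with a /test command
cost_per_run = 1.95                # USD per CI test run
months = 3                         # measurement window

reduction = skipped / (skipped + ran)      # ~0.79, reported as 80%
monthly = skipped * cost_per_run / months  # ~$2,594/month
annual = monthly * 12                      # ~$31,122/year, i.e. "over $31k"
```
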

This project was a powerful demonstration of how a single, data-backed idea, when socialized effectively, can be implemented rapidly to deliver a massive, measurable return on investment by removing friction and eliminating waste.

Incident Command: Turning a Personal Mistake into Systemic Improvements

A retrospective on a SEV-2 incident where I took full ownership of a pre-launch misconfiguration for a critical MVP. This case study details the methodical response process and the key systemic improvements that resulted, turning a personal mistake into a valuable lesson for the entire engineering organization on the non-negotiable importance of observability.

Read the full retrospective ↓

The Challenge: A Pre-Launch SEV-2 Incident (Feb 2021)

Days before the planned launch of the company-defining Subscriptions MVP, we needed to conduct final Quality Assurance on our CRM email flows. This testing had to be done in the production environment to validate the integration with our email provider.

Due to a critical misunderstanding of our internal A/B testing framework's UI, I incorrectly configured the experiment. I believed I was targeting a small whitelist of test users, but I had inadvertently set the experiment live for 100% of eligible users.

The immediate impact was that thousands of customers were exposed to an incomplete, unlaunched feature. The partial experience consisted of incorrect copy promising auto-renewal and a different set of purchase plan sizes.

My Role: Incident Owner and Scribe

As the owner of the feature and the person who made the mistake, I took immediate and full responsibility. My role during the incident was twofold:

  • As the Incident Owner, I was responsible for coordinating the response, assessing the impact and driving the technical resolution.
  • As the designated Scribe, I was responsible for maintaining a clear, timestamped log of all actions and communications, ensuring we would have a precise record for the post-mortem.

The Response: A Methodical Approach Under Pressure

The incident went undetected for 16 hours overnight simply because we had no specific monitoring in place for this new flow. Once it was flagged the next morning, my response followed a clear hierarchy:

  1. Immediate Mitigation: The first action was to stop the user impact. I immediately disabled the experiment in our admin tool, which instantly reverted the experience to normal for all users and stopped the "bleeding."

  2. Diagnosing the Blast Radius: With the immediate crisis averted, I began the diagnosis myself. I queried our database's SubscriptionExperiment table and quickly identified that ~250 users had been incorrectly enrolled, far more than the handful of test accounts we expected.

  3. Resolution and Cleanup: I wrote and deployed a data migration script to correct the state for all affected accounts. This ensured that no user would be incorrectly billed or enrolled in a subscription and that our A/B test data for the upcoming launch would be clean.

The incident was fully resolved in under an hour from the time it was formally declared.

The Outcome: Systemic Improvement from a Personal Mistake

While we successfully corrected the immediate issue, the true value of this incident came from the blameless post-mortem process that I led. The 16-hour detection delay became the central exhibit for a crucial change in our engineering culture.

The post-mortem produced several critical, long-lasting improvements:

  • Improved Tooling: We filed and prioritized tickets to add clearer copy, UX warnings and a "confirmation" step to our internal experimentation framework to prevent this specific type of misconfiguration from ever happening again.
  • A New Engineering Rule: We established a new, mandatory process: any high-risk feature being tested in the production environment must have a dedicated monitoring dashboard built and active before the test begins.
  • A Foundational Personal Learning: I had personally made the trade-off to deprioritize the monitoring and observability tickets for the MVP to meet a tight deadline. This incident was a powerful, firsthand lesson that observability is not a "nice-to-have" feature; it is a core, non-negotiable requirement for any critical system. This principle has fundamentally shaped how I approach every project I've led since.

This incident, born from a personal mistake, became a catalyst for improving our tools, our processes and my own engineering philosophy.

Proactive Ownership: A UX Fix for a +27% GMV Lift

Identified a simple user experience mismatch on our mobile homepage, proactively proposed a low-effort A/B test to a different team and drove a +27% increase in Gross Merchandise Value (GMV) from the affected user segment. This case study is a testament to the power of looking beyond assigned tasks and thinking like an owner of the entire product.

Read the full retrospective ↓

The Challenge: A Simple Observation, A Big Opportunity

While working on a backend task, I was reviewing our platform's user flow and made a simple observation. Our homepage used the same 'hero image' for all users: a person on a laptop. While this was perfectly appropriate for desktop visitors, it struck me as a subtle but significant disconnect for a user visiting our site on their phone. My hypothesis was that this 'one-size-fits-all' approach was creating a subconscious barrier, making the product feel less relevant to mobile users and potentially harming conversion.

My Role: Proactive Contributor

This was a clear example of an opportunity that fell outside both my direct responsibilities and my team's domain. My role was not to implement a fix, but to act as a proactive owner of the overall product experience. This meant validating my observation with data and building a compelling, low-friction proposal for the team that actually owned the homepage.

The Solution: A Data-Backed, Easy-to-Say-Yes-To Proposal

I knew that simply flagging the issue in a Slack channel would likely result in it being lost in the backlog. To drive action, I followed a three-step process:

  1. Validate with Data: I partnered with a data analyst to confirm my hunch. We reviewed engagement metrics and confirmed that our mobile user segment indeed had a lower conversion rate compared to desktop users.
  2. Formulate a Hypothesis: I framed my observation as a clear, testable hypothesis: "Showing a mobile-centric image to mobile users will create better resonance, leading to increased engagement and conversion."
  3. Create a Low-Effort Proposal: I wrote a brief, one-page document for the relevant Product Manager. I didn't ask them to commit to a major roadmap change; I simply proposed a low-effort, high-potential A/B test. By doing the initial data validation and presenting a clear hypothesis, I made it as easy as possible for them to say "yes" and add it to their next sprint.

The Results: Outsized Impact from a Small Change

The homepage team was receptive and ran the A/B test. The results were immediate and exceeded all expectations. The mobile user cohort that saw a new, mobile-centric hero image demonstrated a massive improvement in key metrics:

  • +27% increase in Gross Merchandise Value (GMV)
  • +20% increase in total hours purchased

This initiative was a powerful lesson in the value of thinking like an owner. It proved that sometimes the most significant product improvements don't come from complex, multi-month engineering epics, but from a simple, user-centric observation and the initiative to see it through. It reinforced my belief that every member of a team has the ability to drive impact if they are empowered to look beyond their next ticket.

Strategic Alignment: A Lesson in "Disagree & Commit"

Identified a growing, underserved user segment (Music learners) experiencing significant friction due to product limitations. This case study details a data-backed advocacy effort to improve their experience, ultimately serving as a lesson in aligning individual insights with broader company strategy and the principle of "disagree and commit."

Read the full retrospective ↓

The Challenge: Overlooking Valuable Niche User Segments

Our platform's primary focus was language learning. However, I observed a significant and growing user base for other subjects, particularly Music. While these were not core growth areas, they consistently ranked in our top 10 most popular subjects by hours purchased, commanded higher-than-average hourly rates and represented substantial, profitable revenue streams.

My concern was that these users faced unnecessary friction due to product limitations. For example:

  • Lack of Specialization Filters: Unlike languages (e.g. "Business English"), Music users couldn't search for "Guitar Tutor" or "Piano Tutor", forcing them to manually scroll through long lists and click into profiles.
  • Inflexible Subscription Frequencies: Many adult music learners preferred bi-weekly lessons due to practice time, but our subscription plans only offered weekly frequencies, pushing them towards less convenient options or even churn.

My hypothesis was that these overlooked user experience gaps were leading to unnecessary churn and hindering growth within these valuable niche segments.

My Role: Data-Driven Advocate

This initiative fell outside my direct team's roadmap. My role was to act as a data-driven advocate for these users, identifying the problem, quantifying the opportunity and proposing simple solutions to improve their experience.

The Solution: Building a Case for Niche Improvements

I approached this by building a clear, data-backed case:

  1. Quantify the Opportunity: I gathered data on the total hours and GMV generated by the Music subject, demonstrating its substantial contribution to the company's bottom line despite not being a primary growth focus.
  2. Highlight User Friction: I presented specific examples of user experience issues, backed by anecdotal feedback from customer support, illustrating how our generic platform was failing these specific users.
  3. Propose Low-Effort Solutions: I outlined simple backend changes (e.g. adding instrument specializations, allowing bi-weekly frequency options for specific subjects) that I believed could deliver high ROI by reducing friction and improving retention in these segments.

I presented this proposal to product leadership, emphasizing the potential for "easy wins" by serving an existing, valuable user base better.

The Outcome: Strategic Alignment and "Disagree and Commit"

While the product leadership acknowledged the validity of my insights and the value of these user segments, they made a strategic decision to maintain laser-focus on core language growth. Resources were explicitly allocated away from non-core subjects to maximize impact in the primary business area.

The proposed features were not prioritized in the roadmap.

This project, while not resulting in a shipped feature, provided me with a crucial lesson in "disagree and commit." I learned that:

  • Advocacy is Essential, but Strategy is King: It's vital to advocate fiercely for what you believe is right, especially when backed by data.
  • Respect Broader Strategic Alignment: Once a strategic decision is made, even if you disagree with it, it's essential to understand the rationale and align your efforts with the broader company goals.
  • Professional Conduct: I documented my research and proposal in our internal knowledge base, ensuring the insights were preserved for future consideration. I then refocused my full energy on the prioritized roadmap items.

This experience reinforced the importance of strategic clarity and demonstrated my ability to contribute insights, influence discussions and ultimately commit professionally to the agreed-upon direction.

System Design

Personal & Open Source Projects

Tinymem

A hardware-constrained memory game for the Thumby microcontroller, serving as the technical foundation for my PyDayBCN 2023 presentation. Written in under 50 lines of MicroPython, it demonstrates extreme resource-constrained development across audio, sprite rendering and I/O.

View on GitHub →

Rhythm Radar (2016)

An experimental real-time rhythm visualizer built with D3.js. This proof-of-concept uses polar coordinates to create an intuitive "radar" display for drum patterns, offering a novel alternative to traditional linear notation. The project was inspired by a desire to understand complex rhythms in real time, especially when multiple instruments and loops are involved.

View on GitHub →

This Homepage

A custom-built, zero-framework static site generator powered by Node.js.

  • Architecture: Employs a headless Markdown approach with parallelized build tasks and per-page CSS loading for an exceptionally fast render path.
  • Discipline: Enforces strict Conventional Commits for an automated changelog and maintains WCAG-compliant accessibility and performance standards.

View on GitHub →

Pycrastinate (2014)

A language-agnostic tool for managing TODO and FIXME comments across entire codebases, presented in full at PyCon Sweden 2014 and abridged at EuroPython 2014.

  • Architecture: Built as a configurable pipeline, the tool finds tagged comments, enriches them with git blame metadata (author, date) and generates reports.
  • Purpose: This early project was an exploration into building developer tooling and demonstrates an early interest in code quality and maintainability.

View on GitHub →

Presentations

Education & Interests

M.Sc. from NTNU/UPC, fluent in English and active in community leadership.


M.Sc. in Informatics Engineering (EQF 7), Highest Honours

Norwegian University of Science and Technology (NTNU) & Universitat Politècnica de Catalunya (UPC - BarcelonaTech), 2011

  • Completed a 5-year, 300+ ECTS program in Informatics Engineering at UPC, with specializations in Data Management and Software Engineering.
  • Authored a Master's Thesis on Test-Driven Conceptual Modelling at NTNU as part of the ERASMUS programme. This received the highest possible grade (A with Highest Honours) from both institutions.

Languages

  • Native: Catalan, Spanish
  • Full Proficiency: English (10+ years professional experience; Cambridge CPE/C2)
  • Conversational: Swedish (6 years residency; A2 certified)
  • Basic: Norwegian (EILC Bokmål course)

Leadership & Interests

  • Community Leadership (VP & Treasurer, 2006-2010): Co-managed a non-profit cultural association aimed at providing positive leisure alternatives for local youth. Oversaw budgets, secured public funding and organized community events, including annual LAN parties and boardgame weekends.
  • Personal Interests: I enjoy classical music, performing on piano and harpsichord and participating in organ masterclasses. I'm also an avid reader and fond of indie games: I'm currently in the review stage to publish a game in the Playdate Catalog, and have previously developed PoCs for the Thumby, such as a Simon clone and a multiplayer Pong clone.

Credentials

Degrees, diplomas and language certificates are available for review on GitHub.