<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Industrial Projects | Academic CV Pro Hugo Theme</title><link>https://dgullate.github.io/industry/</link><atom:link href="https://dgullate.github.io/industry/index.xml" rel="self" type="application/rss+xml"/><description>Industrial Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Tue, 25 Nov 2025 00:00:00 +0000</lastBuildDate><image><url>https://dgullate.github.io/media/icon_hu7729264130191091259.png</url><title>Industrial Projects</title><link>https://dgullate.github.io/industry/</link></image><item><title>ATENEA for Aerospace Manufacturing</title><link>https://dgullate.github.io/industry/atenea-aerospace-manufacturing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/atenea-aerospace-manufacturing/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>ATENEA: systems based on artificial intelligence to support manufacturing engineering. Contract Art. 83 between AIRBUS D&amp;amp;S and Universidad de Cádiz (CDTI Interconnecta). PI: David Gómez-Ullate (UCA), 01/04/2019 – 31/10/2019, Sum: 90.000 EUR.&lt;/p>
&lt;p>Modern aerospace manufacturing demands extremely high quality standards, especially in composite components where defects can be costly and difficult to detect. Within the context of Industry 4.0, ATENEA was a research and innovation project funded by CDTI and developed in collaboration with Airbus, with the goal of bringing data science and artificial intelligence directly into the production and inspection of fan cowls for the Airbus A320/A330 Neo.&lt;/p>
&lt;p>The project focused on transforming large volumes of heterogeneous industrial data—machine logs, sensor measurements, manufacturing records, and inspection images—into actionable insights. By integrating predictive analytics, computer vision, and real-time monitoring, ATENEA aimed to improve quality inspection, anticipate manufacturing deficiencies, and reduce non-conformities before they propagated through the production line.&lt;/p>
&lt;p>A key outcome of the project was the development of intelligent tools to support both process monitoring and quality assurance, enabling earlier detection of structural defects, better traceability, and more consistent inspection criteria. ATENEA demonstrates how advanced analytics can be embedded into real production environments to enhance reliability, efficiency, and decision-making in aerospace manufacturing.&lt;/p>
&lt;p>From a methodological perspective, the project combined predictive modeling and computer vision with industrial data pipelines. Machine-learning models were developed to estimate the probability of non-conformities related to delamination and porosity using data from composite layup machines, environmental sensors, tooling information, and SAP production records. In parallel, image-processing algorithms based on ultrasound inspection data were designed to automatically detect and segment potentially defective regions, producing an objective quality score for inspection reliability. These results were integrated into a real-time dashboard architecture, allowing continuous monitoring of production variables and inspection performance across the manufacturing process.&lt;/p>
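&lt;p>As a minimal sketch of the first component, the snippet below trains a non-conformity classifier on tabular process data. All file and column names are hypothetical placeholders; the actual Airbus features and data are confidential.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: non-conformity risk model on process data (hypothetical
# file and column names; the real ATENEA data are confidential).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# One row per manufactured part: variables joined from machine logs,
# environmental sensors, tooling information and SAP records.
df = pd.read_csv("layup_process_records.csv")          # hypothetical file
features = ["layup_speed", "room_temp", "room_humidity", "tool_age_cycles"]
X, y = df[features], df["non_conformity"]              # 1 = delamination/porosity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
&lt;/code>&lt;/pre>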
&lt;p>This was a challenging Industry 4.0 collaboration between Airbus D&amp;amp;S and UCA Datalab at University of Cádiz. I led a team of 4 data scientists and software developers to complete our work package. We made frequent visits to the Airbus production plant at CBC - Puerto de Santa María to work in situ with Airbus personnel, deliver formation courses, become familiar with the whole Fan Cowl production process, etc. Unfortunately, confidentiality requirements due to the sensitive nature of the data and production process did not allow the publication of the results in scientific journals.&lt;/p></description></item><item><title>Climate Risk and the Future of Insurance</title><link>https://dgullate.github.io/industry/climate-risk-insurance/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/climate-risk-insurance/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Climate change is no longer a distant concern for insurers: it is already reshaping mortality patterns, life expectancy, and long-term risk. Rising temperatures, more frequent extremes, and uneven adaptation across populations pose fundamental challenges to how life and health insurance products are priced, managed, and regulated.&lt;/p>
&lt;p>This project, developed in collaboration between IE University and Vienna Insurance Group (VIG), translates cutting-edge climate and health research into tools that insurers can actually use. We study how temperature and other climate-related risk factors affect mortality across age groups, regions, and future climate scenarios, and we embed these effects directly into actuarial quantities such as death probabilities, life expectancy, and life tables.&lt;/p>
&lt;p>The outcome is a new generation of climate-adjusted life tables that allow insurers to explore how different climate pathways may impact portfolios over time. By combining public data, scientific projections, and proprietary information, the project supports more informed decisions around pricing, capital requirements, and long-term risk management in a changing climate.&lt;/p>
&lt;p>Methodologically, the project builds on exposure–response functions linking temperature to mortality, combined with high-resolution climate data and future climate scenarios (Shared Socioeconomic Pathways, SSPs). These relationships are propagated through an actuarial pipeline to produce climate-adjusted mortality rates and life tables by age, sex, and region. The framework integrates results from state-of-the-art epidemiological studies with demographic and actuarial modeling, enabling scenario-based projections of longevity and climate risk relevant for insurance portfolios.&lt;/p>
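&lt;p>To make the actuarial step concrete, here is a minimal sketch of how an assumed climate-driven relative risk could be applied to a baseline life table to obtain climate-adjusted death probabilities and life expectancy. The Gompertz baseline and relative-risk numbers are illustrative, not VIG or project figures.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: climate-adjusted life table (illustrative numbers only).
import numpy as np

ages = np.arange(0, 101)
qx_base = 0.0001 * np.exp(0.085 * ages)          # toy Gompertz baseline q_x
rr = np.where(ages >= 65, 1.03, 1.005)           # assumed relative risk under
                                                 # a warming scenario (an SSP)
qx_adj = np.clip(qx_base * rr, 0.0, 1.0)         # climate-adjusted death probs

def life_expectancy(qx):
    px = 1.0 - qx                                    # survival probabilities
    lx = np.concatenate(([1.0], np.cumprod(px)))     # survivors at each age
    return np.sum((lx[:-1] + lx[1:]) / 2)            # trapezoidal person-years

print("baseline e0:", life_expectancy(qx_base))
print("adjusted e0:", life_expectancy(qx_adj))
&lt;/code>&lt;/pre>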
&lt;p>To learn more about the project, visit the &lt;a href="https://climateinsure.tech/" target="_blank" rel="noopener">project website&lt;/a>. We are organizing the 3rd edition of the Climate Change and Insurance conference series (&lt;a href="https://cci26.climateinsure.tech/" target="_blank" rel="noopener">CCI 26&lt;/a>) at IE University’s campus in Segovia, from 2-4 September 2026.&lt;/p></description></item><item><title>COVID-19 Impact of NPIs</title><link>https://dgullate.github.io/industry/covid-19-impact-of-npis/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/covid-19-impact-of-npis/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>The COVID-19 pandemic was a singular event in which scientific activity proved instrumental in fighting the disease and supporting better decision-making. Scientists worked round the clock from their homes during lockdown to establish networks, gather and process data, build models, and draft reports for decision makers.&lt;/p>
&lt;p>In this context, the Spanish Committee of Mathematics, CEMAT (Comité Español de Matemáticas), established a committee of experts called “Acción Matemática contra el Coronavirus”, drawing on the four main societies (SEMA, RSME, SCM and SEIO), whose role was to elaborate a mathematical response to the challenges posed by the pandemic.&lt;/p>
&lt;p>The Committee developed a meta-prediction model in which many modeling groups participated, predicting the short-term prevalence and spread of the disease.&lt;/p>
&lt;p>The Committee started working with technical experts from the Ministry of Health and the Ministry of Economy, and it was tasked with modeling the effect of Non-Pharmaceutical Interventions (NPIs) from both a public health and an economic point of view. This was particularly important, since decision makers were often faced with the choice of adopting more restrictive measures with considerable economic impact, and the effectiveness of those measures had to be predicted in advance.&lt;/p>
&lt;p>Our study analyzed the effectiveness of non-pharmaceutical interventions (NPIs) implemented in Spain during the second wave of COVID-19 (September 2020 to May 2021). Researchers compiled detailed provincial and municipal data on restrictions across nine areas of activity and constructed a daily “restriction intensity index” ranging from 0 to 1 to quantify the strength of measures over time. Using statistical modeling under the framework of the Spanish Committee of Mathematics’ “Mathematical Action against Coronavirus” initiative, the team evaluated how changes in restriction intensity affected virus transmission.&lt;/p>
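&lt;p>As an illustration, a daily restriction intensity index can be computed as a weighted average of per-area intensities and compared with transmission one week later. The files, column names, and weights below are hypothetical; the actual index covered nine areas of activity and fed a full statistical model.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: a restriction intensity index in [0, 1] (hypothetical inputs).
import pandas as pd

npi = pd.read_csv("npi_by_province.csv", parse_dates=["date"])
cases = pd.read_csv("cases_by_province.csv", parse_dates=["date"])

# Weighted average of per-area intensities, each already scaled to [0, 1].
weights = {"social_distancing": 0.4, "indoor_hospitality": 0.3,
           "leisure": 0.2, "worship": 0.1}
npi["intensity"] = sum(w * npi[a] for a, w in weights.items())

# Compare the index with transmission one week later (a crude stand-in
# for the regression model actually used by the Committee).
merged = npi.merge(cases, on=["province", "date"]).sort_values(["province", "date"])
merged["growth"] = merged.groupby("province")["incidence"].pct_change(7)
merged["growth_t7"] = merged.groupby("province")["growth"].shift(-7)
print(merged[["intensity", "growth_t7"]].corr())
&lt;/code>&lt;/pre>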
&lt;p>The results showed that increasing the overall intensity of measures by 34% was associated with a 22% reduction in transmission within one week. Interventions related to social distancing and indoor hospitality were found to be particularly effective, while measures affecting leisure, cultural activities, places of worship, religious celebrations, and indoor sports showed less clear effects—though these differences should be interpreted cautiously, as many measures were implemented simultaneously. The project also made all collected data publicly available to support transparency and future research, highlighting the critical role of mathematical modeling and data analysis in managing public health crises.&lt;/p>
&lt;p>My role in this project mainly involved coding and processing the NPIs into useful variables for the statistical model that matched the NPI intensity time series to incidence metrics. In a separate project, we built a predictive tool to assist hospitals in planning for extra ICU beds, leveraging what was known about disease dynamics and the observed numbers of infected individuals.&lt;/p>
&lt;h3 id="media-coverage">Media Coverage&lt;/h3>
&lt;ul>
&lt;li>Interview in eldiario.es “To fight the pandemic, we need transparency and access to good data.” (17/04/20) &lt;a href="https://www.eldiario.es/sociedad/david-gomez-ullate-pandemia-necesitamos-transparencia_128_5871280.html" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>Interview for Real Sociedad Matemática Española (09/04/21) &lt;a href="https://rsme.es/david-gomez-ullate-no-se-puede-basar-en-el-voluntarismo-todo-el-trabajo-contra-el-coronavirus/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>Las matemáticas frente a la Covid-19, Fundación Ramón Areces en colaboración con Real Sociedad Matemática Española &lt;a href="https://www.youtube.com/watch?v=ZZlDwg8cypw" target="_blank" rel="noopener">video&lt;/a>&lt;/li>
&lt;li>M. Salomone “Spanish mathematicians look for a model to predict how the pandemic will evolve”, Fundación BBVA (07/04/20) &lt;a href="https://www.bbva.com/en/spanish-mathematicians-look-for-a-model-to-predict-how-the-pandemic-will-evolve/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>“Big Data contra el coronavirus y ¿nuestra privacidad?” , Fallo de Sistema, Radio 3 (19/04/20) &lt;a href="https://www.rtve.es/play/audios/fallo-de-sistema/fallo-sistema-401-big-data-contra-coronavirus-nuestra-privacidad-19-04-20/5560290/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>“Desarrollan un modelo predictivo de ocupación de camas en las UCI de los hospitales andaluces” Fundación Descubre, Junta de Andalucía &lt;a href="https://fundaciondescubre.es/noticias/desarrollan-un-modelo-predictivo-de-ocupacion-de-camas-en-las-uci-de-los-hospitales-andaluces/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>Acción Matemática contra la COVID confirma que el incremento de las restricciones redujo la transmisión del virus en un 22% a la semana. CITIC-UDC (17/04/23) &lt;a href="https://citic.udc.es/accion-matematica-contra-la-covid-confirma-que-el-incremento-de-las-restricciones-redujo-la-transmision-del-virus-en-un-22-a-la-semana/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Fraud Detection in Electronic Payments</title><link>https://dgullate.github.io/industry/fraud-detection-payments/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/fraud-detection-payments/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>This research project applied advanced techniques from artificial intelligence (AI) and data science to the problem of detecting fraud in electronic payment systems, with a particular focus on credit card transactions.&lt;/p>
&lt;p>At the time, commercial AI-based tools were still in their infancy, and many anti-fraud systems were a combination of rule-based and very basic statistical filters. Our work involved analyzing over 150 million real transactions collected over one year by a first-tier bank, to identify statistical traces and behavioural patterns associated with fraudulent activity. By leveraging AI-driven models, the research aimed to improve the ability of financial institutions to decide in real time whether a transaction should be blocked.&lt;/p>
&lt;p>The study tackled significant challenges typical of fraud detection—such as extremely imbalanced data (with approximately one fraudulent transaction per 6 000 legitimate ones), evolving fraud strategies that change over time, and the need to model the decision-making utility for both the bank and the fraudster. The project developed new algorithms that substantially improved model efficiency compared to existing approaches.&lt;/p>
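&lt;p>A minimal sketch of this kind of cost-sensitive formulation is shown below, using synthetic data and illustrative utility values; the real features, utilities, and algorithms are confidential.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: cost-sensitive fraud detection on heavily imbalanced data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: roughly 1 fraud per 6,000 legitimate transactions.
rng = np.random.default_rng(0)
X = rng.normal(size=(60_000, 10))                # placeholder features
y = (rng.random(60_000) > 0.99983).astype(int)   # ~1/6,000 positives

clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Block a transaction when expected fraud loss exceeds the friction cost
# of blocking (illustrative utilities, in EUR).
cost_fraud, cost_block = 500.0, 2.0
p = clf.predict_proba(X)[:, 1]
block = p * cost_fraud > cost_block
print("share of transactions blocked:", block.mean())
&lt;/code>&lt;/pre>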
&lt;p>In addition to its technical contributions, the project emphasised training early-stage researchers in statistical learning and data science, responding to strong market demand for these skills and the lack of formal academic programmes in this area at the time.&lt;/p>
&lt;p>Due to confidentiality clauses, we were unable to publish the results of this research.&lt;/p>
&lt;h3 id="media-coverage">Media Coverage&lt;/h3>
&lt;ul>
&lt;li>“Matemáticas antirrobo y otras cuatro ideas para mejorar el mundo”, Diario El País (30/07/2015)&lt;/li>
&lt;li>“La Fundación BBVA financia un proyecto basado en matemáticas que permitirá adelantarse al fraude bancario” ICMAT (29/07/2015) &lt;a href="https://www.icmat.es/es/actualidad/np-29-07-15/" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="projects-and-contracts">Projects and Contracts&lt;/h3>
&lt;ul>
&lt;li>Artificial Intelligence and Data Science: Applications in Payment Fraud Detection, Leonardo Scholarship, Fundación BBVA. &lt;a href="https://www.redleonardo.es/beneficiario/david-gomez-ullate-oteiza/" target="_blank" rel="noopener">link Red Leonardo&lt;/a>&lt;/li>
&lt;li>Sum: 40.000 EUR (2015-2016)&lt;/li>
&lt;/ul>
&lt;h3 id="contract-and-funding">Contract and Funding&lt;/h3>
&lt;ul>
&lt;li>Learning for fraud detection in electronic payments.&lt;/li>
&lt;li>Contract Art. 83 between Evendor Engineering SL and Univ. Complutense de Madrid.&lt;/li>
&lt;li>PI: David Gómez-Ullate (UCM-ICMAT), 01/06/2015 - 31/12/2015, Sum: 20.000 EUR.&lt;/li>
&lt;/ul></description></item><item><title>Green Navigation</title><link>https://dgullate.github.io/industry/transport/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/transport/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;h3 id="projects-and-contracts">Projects and Contracts&lt;/h3>
&lt;ul>
&lt;li>Mathematical optimization for a more efficient, safer and decarbonized maritime transport. Fundación BBVA - Matemáticas 2021, 01/09/2022 – 30/06/2025. PI: David Gómez-Ullate (IE University), Team: 23 researchers, Sum: 150.000 EUR.&lt;/li>
&lt;li>Optimización de rutas marítimas con información oceanográfica en tiempo real. AEI Transición Ecológica y Digital (TED2021-129455B-I00), 01/12/2022 – 30/11/2024. PI: David Gómez-Ullate (IE University), Team: 12 researchers, Sum: 116.380 EUR.&lt;/li>
&lt;/ul>
&lt;h3 id="presentations">Presentations&lt;/h3>
&lt;ul>
&lt;li>Mathematical optimization for a more efficient, safer and decarbonized maritime transport, Royal Academy of Sciences, Madrid, Jul 7, 2023 &lt;a href="https://www.youtube.com/watch?v=i6BFsdg-AVw&amp;amp;t=1709s" target="_blank" rel="noopener">video&lt;/a>.&lt;/li>
&lt;li>Smart Shipping: weather routing and WASP simulation, Session on Wind Propulsion at Sea Tech Week, Brest, France, Sep 26-30, 2022.&lt;/li>
&lt;li>Mathematical optimization of maritime shipping routes, IE-RSME Workshop on Applied Mathematics in Sustainability and Climate Change, IE Tower (Madrid), May 30, 2023 &lt;a href="https://ieconnects.ie.edu/iersmeworkshops/rsvp_boot?id=300088368" target="_blank" rel="noopener">link&lt;/a>.&lt;/li>
&lt;/ul>
&lt;h3 id="technical-overview">Technical Overview&lt;/h3>
&lt;p>Maritime transport accounts for more than 80% of global trade, yet it faces growing regulatory and environmental pressure to reduce fuel consumption and greenhouse gas emissions. At the same time, the increasing availability of high‑resolution oceanographic and meteorological data has opened new opportunities for smarter, data‑driven navigation.&lt;/p>
&lt;p>The Green Navigation project was born at this intersection: how can rigorous mathematical optimization transform real‑time weather and ocean data into safer, more efficient, and lower‑emission shipping routes?&lt;/p>
&lt;p>Funded by Fundación BBVA and the Spanish Ministry of Science, and developed in collaboration with Navantia and multiple academic partners, the project aimed to design a next‑generation weather routing system capable of integrating ocean currents, wind forecasts, fuel consumption models, vessel dynamics, and regulatory constraints into a unified optimization framework.&lt;/p>
&lt;p>The result is a fully operational routing platform that combines advanced mathematical theory with industrial‑grade software. The system computes optimized maritime routes based on departure date, vessel characteristics, cruise speed, and meteorological forecasts. It has been validated in real pilot studies with Boluda Shipping, achieving average fuel savings of approximately 5-10% while respecting operational constraints. These reductions translate directly into lower CO₂ emissions, improved economic performance, and enhanced navigational safety.&lt;/p>
&lt;p>From a methodological perspective, Green Navigation integrates multiple layers of applied mathematics, optimization, and computational science:&lt;/p>
&lt;ul>
&lt;li>Variational methods and geometric integration extend the classical Zermelo navigation problem using structure‑preserving discretizations with proven convergence properties. Parallel GPU implementations ensure scalability for large‑scale computations.&lt;/li>
&lt;li>Graph‑based optimization reformulates maritime routing as a time‑dependent shortest‑path problem (see the sketch after this list). The HADAD algorithm (Hexagonal A‑Star with Differential refinement) introduces a hierarchical hexagonal spatial discretization, enabling multi‑scale exploration of the ocean surface while maintaining computational efficiency.&lt;/li>
&lt;li>Hybrid global–local search strategies combine fast heuristic exploration with variational refinement, ensuring both near‑global optimality and local numerical accuracy.&lt;/li>
&lt;li>Robust optimization under uncertainty incorporates ensemble weather forecasts and sensitivity analyses to quantify the impact of meteorological prediction errors on route performance.&lt;/li>
&lt;li>Data‑driven consumption and emissions models link vessel characteristics and sea‑state conditions to fuel consumption and pollutant emissions. Artificial neural networks trained on thousands of simulated hull geometries enable rapid prediction of hydrodynamic loads and seakeeping behavior with high accuracy.&lt;/li>
&lt;/ul>
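&lt;p>The sketch below illustrates the graph‑based layer on a toy grid: Dijkstra’s algorithm with edge costs that depend on a current field and on arrival time. It is a simplified stand-in for HADAD’s hierarchical hexagonal mesh, with synthetic currents and made-up vessel parameters.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: time-dependent shortest path on a toy ocean grid.
import heapq
import numpy as np

N = 20                                            # toy N x N ocean grid
rng = np.random.default_rng(1)
current_u = 0.5 * rng.standard_normal((N, N))     # eastward current (m/s)
SHIP_SPEED, CELL = 5.0, 1000.0                    # m/s, metres per cell

def travel_time(a, b, t):
    """Edge crossing time; a real field would also be indexed by time t."""
    dj = b[1] - a[1]                              # +1 east, -1 west, 0 otherwise
    speed = max(SHIP_SPEED + dj * current_u[a], 0.5)
    return CELL / speed

def shortest_route(src, dst):
    """Dijkstra with time-dependent edge costs (arrival time passed along)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        if t > dist.get(node, float("inf")):
            continue                              # stale queue entry
        i, j = node
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if min(nb) >= 0 and N - 1 >= max(nb):
                t2 = t + travel_time(node, nb, t)
                if dist.get(nb, float("inf")) > t2:
                    dist[nb] = t2
                    heapq.heappush(pq, (t2, nb))
    return None

print("crossing time (s):", shortest_route((0, 0), (N - 1, N - 1)))
&lt;/code>&lt;/pre>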
&lt;p>A key contribution of the project is the creation of Weather Routing Bench 1.0, the first open benchmark for maritime weather routing. By providing standardized datasets, evaluation metrics, and reference implementations, this platform promotes reproducibility, objective comparison, and cumulative progress in a field that previously lacked a common evaluation framework.&lt;/p>
&lt;p>The technological maturity and industrial validation of the project ultimately led to the creation of a spin‑off company, &lt;a href="https://greenavigation.com" target="_blank" rel="noopener">Green Navigation&lt;/a>. The startup builds on the algorithms, software architecture, and optimization methodologies developed during the research project, translating them into scalable commercial solutions for the maritime sector.&lt;/p>
&lt;p>This transition from academic research to entrepreneurial initiative reflects the project’s dual ambition: advancing frontier mathematical research while delivering measurable environmental and economic impact in real shipping operations.&lt;/p></description></item><item><title>NeoCam</title><link>https://dgullate.github.io/industry/neocam/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/neocam/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>NeoCam was a beautiful project carried out in collaboration among many members of UCA Datalab. It was originally motivated by the first &lt;a href="https://opencv.org/opencv-ai-competition-2021/" target="_blank" rel="noopener">OpenCV AI Competition in 2021&lt;/a>, which called for projects based on the then-new edge computing camera OAK-D from Luxonis, which has three camera objectives (providing a depth field) and a built-in chip that runs computer vision models on the device itself.&lt;/p>
&lt;p>Our project involved developing a complete edge-cloud platform for real-time monitoring of newborns in neonatal intensive care units. Premature babies require continuous observation, yet much clinically relevant information — such as motor activity, stress indicators, environmental conditions, and subtle behavioral cues — is either assessed subjectively or not captured systematically. By embedding intelligent devices directly at the incubator level, the project enables continuous collection and analysis of multimodal data (video, audio, vital signs, and environmental signals) while preserving privacy and minimizing intrusiveness. The goal is to transform raw, heterogeneous sensor streams into structured clinical information that supports more informed and timely medical decisions.&lt;/p>
&lt;p>Technically, the system combines optimized deep learning models deployed on edge devices with cloud-based infrastructure for secure storage, visualization, and integration into hospital workflows. In three months, a team of 8 researchers developed NeoCam, a device able to measure the motion of the baby’s limbs, infer emotional status to detect pain, and measure breathing rate in a contactless fashion. The whole proof of concept, which also involved a cloud platform and a mobile app, was deployed at the Neonatal Intensive Care Unit of Hospital Puerta del Mar in Cádiz.&lt;/p>
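&lt;p>To give a flavour of the contactless breathing-rate idea, the sketch below recovers a breathing frequency from the mean pixel intensity of a chest region via an FFT. This is a generic illustration on a synthetic signal, not the deployed NeoCam model.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: breathing rate from a chest-region intensity signal (synthetic).
import numpy as np

fps = 30.0
t = np.arange(0, 60, 1 / fps)                    # one minute of video
rng = np.random.default_rng(0)
roi = 0.5 * np.sin(2 * np.pi * 0.6 * t)          # synthetic chest-ROI signal
roi = roi + 0.1 * rng.standard_normal(t.size)    # camera noise

spectrum = np.abs(np.fft.rfft(roi - roi.mean()))
freqs = np.fft.rfftfreq(roi.size, d=1 / fps)

# Keep plausible neonatal breathing rates (about 20 to 90 breaths/min).
band = ((freqs > 20 / 60) * (90 / 60 > freqs)).astype(bool)
peak = freqs[band][np.argmax(spectrum[band])]
print("estimated rate:", 60 * peak, "breaths/min")
&lt;/code>&lt;/pre>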
&lt;p>The project won the first regional prize (Europe) in the 2021 OpenCV AI Competition and the &lt;a href="https://opencv.org/announcing-the-grand-prize-winners-of-opencv-ai-competition-2021/" target="_blank" rel="noopener">second global prize&lt;/a>, out of more than 1400 submitted projects. It also led to a publication in the IEEE Journal of Biomedical and Health Informatics, a highly reputed journal.&lt;/p>
&lt;p>The PI of this project was my colleague Lionel Cervera. I was responsible for the development of the three AI algorithms incorporated into NeoCam.&lt;/p>
&lt;h3 id="media-coverage">Media Coverage&lt;/h3>
&lt;ul>
&lt;li>OpenCV Blog Episode 32 (04/11/21) Real-time tele-monitoring of preterm neonates with NeoCam - Interview with Satya Mallick, CEO of OpenCV. &lt;a href="https://www.youtube.com/watch?v=cQ-Q-4QGRCo" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;li>A UCA project wins first place in an international Artificial Intelligence competition, Diario de Cádiz (8/09/21) &lt;a href="https://www.diariodecadiz.es/noticias-provincia-cadiz/UCA-primer-puesto-competicion-internacional-Inteligencia-Artificial-video_0_1609040454.html" target="_blank" rel="noopener">link&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="videos">Videos&lt;/h3>
&lt;ul>
&lt;li>Spot: &lt;a href="https://www.youtube.com/watch?v=58KHGucW0dQ" target="_blank" rel="noopener">https://www.youtube.com/watch?v=58KHGucW0dQ&lt;/a>&lt;/li>
&lt;li>Motion detector: &lt;a href="https://www.youtube.com/watch?v=CGLl9O9GtEg" target="_blank" rel="noopener">https://www.youtube.com/watch?v=CGLl9O9GtEg&lt;/a>&lt;/li>
&lt;li>Breath rate detector: &lt;a href="https://www.youtube.com/watch?v=ZsHf2NaaHW8" target="_blank" rel="noopener">https://www.youtube.com/watch?v=ZsHf2NaaHW8&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Non-Intrusive Load Monitoring</title><link>https://dgullate.github.io/industry/nilm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/nilm/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Understanding how electricity is used inside homes is essential for improving energy efficiency, reducing emissions, and designing smarter power systems. Non-Intrusive Load Monitoring (NILM) aims to identify the activity of individual appliances, such as fridges, washing machines, or dishwashers, using only the total electricity consumption measured by a smart meter. By avoiding the need for dedicated sensors on each device, NILM enables scalable and privacy-preserving energy analytics. Reliable appliance-level information can support demand response, personalized energy feedback, fault detection, and more effective integration of renewable energy into the grid.&lt;/p>
&lt;p>This work addresses a fundamental but often overlooked modeling choice in NILM: how to define when an appliance is considered ON or OFF. Since real datasets usually provide power consumption but not appliance states, this requires introducing thresholding rules that transform a regression problem into a classification task. We show that different thresholding methods lead to substantially different learning problems and performance outcomes, even when using the same deep learning architectures. By systematically comparing thresholding strategies and proposing objective criteria based on signal reconstruction error, the paper highlights the importance of problem formulation in NILM. In addition, a multi-task learning approach is explored, showing that jointly learning appliance status and power consumption can improve performance for certain types of devices through transfer learning between regression and classification tasks.&lt;/p>
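&lt;p>The sketch below contrasts two simple thresholding strategies on a toy power trace: a fixed middle-point threshold and an activation-time filter that discards short ON runs. The values are illustrative; the papers derive thresholds from the data and compare strategies systematically.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: two ways of turning a power trace into ON/OFF states.
import numpy as np

power = np.array([0, 3, 5, 70, 120, 115, 90, 4, 80, 0], dtype=float)  # watts

# (a) Middle-point threshold between the min and max observed power.
thr = (power.min() + power.max()) / 2
status = (power > thr).astype(int)

# (b) Activation-time post-processing: zero out ON runs shorter than min_on.
def min_on_filter(status, min_on=2):
    out, start = status.copy(), None
    for k, s in enumerate(list(status) + [0]):    # sentinel OFF at the end
        if s == 1 and start is None:
            start = k
        if s == 0 and start is not None:
            if min_on > k - start:                # run too short: discard
                out[start:k] = 0
            start = None
    return out

print("middle-point:", status)
print("with min-on :", min_on_filter(status, min_on=2))
&lt;/code>&lt;/pre>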
&lt;p>This was the first paper we wrote with Daniel Precioso when he joined UCA Datalab at University of Cádiz to start his industrial PhD. He was already interested in this topic because he had done an internship at Foqum analytics. We extended his work and addressed a relevant conceptual problem in how NILM problems are usually framed. The work was presented at several conferences, among them:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://www.bcamath.org/es/bidas-4-fourth-bilbao-data-science-workshop-0" target="_blank" rel="noopener">BIDAS 4 - Fourth Bilbao Data Science Workshop&lt;/a>, BCAM (Nov 7-10 2019).&lt;/li>
&lt;li>&lt;a href="https://www.powertech2021.com/" target="_blank" rel="noopener">IEEE Powertech Madrid 2021&lt;/a>, (Jun 28-Jul 2 2021).&lt;/li>
&lt;/ul>
&lt;p>It also led to the publication of these papers:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://link.springer.com/article/10.1007/s11227-023-05149-8" target="_blank" rel="noopener">Thresholding Methods in Non-Intrusive Load Monitoring&lt;/a>. The Journal of Supercomputing 79 14039–14062 (2023).&lt;/li>
&lt;li>&lt;a href="https://ieeexplore.ieee.org/document/9494943" target="_blank" rel="noopener">Non-Intrusive Load Monitoring Using Multi-Output CNNs&lt;/a>. 2021 IEEE Madrid PowerTech, Madrid, Spain (2021).&lt;/li>
&lt;/ul></description></item><item><title>Predicting Drifting Buoys</title><link>https://dgullate.github.io/industry/predicting-drifting-buoys/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/predicting-drifting-buoys/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Satlink is one of the world’s largest manufacturers of smart buoys for the tuna fishing industry. While I was working at University of Cádiz, they approached me with an initial project that involved predicting the drifting trajectory of a FAD (Fish Aggregating Device) to which their buoys are attached. Besides other sensors, these smart buoys contain an echosounder that measures biomass and detects tuna presence, and they send this information to a satellite every hour. The buoys are deployed by ships that drop them in the ocean, and once released they drift with the ocean currents. The company found that, despite having an operational life of 6-12 months, many buoys were lost within 2-3 weeks through collision with the coast. They wanted to identify the best spots to drop the buoys so that the drift would keep them circulating in the ocean.&lt;/p>
&lt;p>Satlink provided us with a very valuable dataset: the daily positions of more than 40.000 drifting buoys in the Indian Ocean.&lt;/p>
&lt;p>We developed a Lagrangian numerical integration scheme which used ocean current predictions from a European provider (Copernicus) and an American one (HYCOM).&lt;/p>
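&lt;p>A minimal sketch of such a scheme is shown below: fourth-order Runge-Kutta integration of a buoy position in an interpolated current field. The field here is synthetic; the project used gridded forecasts from Copernicus and HYCOM.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: Lagrangian drift integration in a gridded current field.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic current field over an Indian Ocean box (degrees, m/s).
lon = np.linspace(40, 80, 81)
lat = np.linspace(-20, 20, 81)
LON, LAT = np.meshgrid(lon, lat, indexing="ij")
u = 0.3 * np.sin(np.radians(3 * LAT))            # zonal current
v = 0.1 * np.cos(np.radians(3 * LON))            # meridional current
interp_u = RegularGridInterpolator((lon, lat), u)
interp_v = RegularGridInterpolator((lon, lat), v)

def velocity(p):
    """Current velocity in degrees/s at position p = (lon, lat)."""
    m_per_deg = 111_000.0
    du = interp_u(p)[0] / (m_per_deg * np.cos(np.radians(p[1])))
    dv = interp_v(p)[0] / m_per_deg
    return np.array([du, dv])

def rk4_step(p, dt):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

pos, dt = np.array([60.0, 0.0]), 3600.0          # start point, 1-hour step
for _ in range(24 * 14):                         # two weeks of drift
    pos = rk4_step(pos, dt)
print("position after two weeks:", pos)
&lt;/code>&lt;/pre>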
&lt;p>As a side project, we used their data to validate predictions from these models, comparing the true trajectories of the buoys with those predicted by the Lagrangian model. This is an example of a synergistic by-product of collaboration between industry and research. The largest publicly funded program to gather data from drifting buoys is the NOAA &lt;a href="https://www.aoml.noaa.gov/global-drifter-program/" target="_blank" rel="noopener">Global Drifter Program&lt;/a>, which maintains around 1000 buoys through a collaboration between 19 countries. By contrast, we had access to a proprietary dataset spanning more than 20 years of operation of ca. 40.000 buoys in the three major oceans, providing daily (sometimes hourly) positions. Of course these buoys were deployed for fishing purposes, but the data they gathered can be used for many other scientific purposes; in this case, validation of Ocean General Circulation Models.&lt;/p>
&lt;p>My role in this project was Principal Investigator and responsible for the contract. It was a good opportunity to work with Karan Bedi, a visiting MSc student from IIT Roorkee (India), who visited UCA Datalab in Cádiz during that period.&lt;/p>
&lt;h3 id="projects-and-contracts">Projects and Contracts&lt;/h3>
&lt;ul>
&lt;li>Prediction of drifting objects in the ocean. Contract Art. 83 between Satlink SL and Universidad de Cádiz. PI: David Gómez-Ullate (UCA), 01/04/2019 – 01/07/2019, Sum: 70.000 EUR.&lt;/li>
&lt;/ul></description></item><item><title>Smart Advertising Decisions</title><link>https://dgullate.github.io/industry/marketing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/marketing/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Machine learning for precision marketing.&lt;/p>
&lt;h3 id="projects-and-contracts">Projects and Contracts&lt;/h3>
&lt;ul>
&lt;li>Contract Art. 83 between Omnicom Media Group S.A. and Univ. Complutense de Madrid. PI: David Gómez-Ullate (UCM-ICMAT), 01/12/2016 - 30/11/2017, Sum: 27.600 EUR.&lt;/li>
&lt;li>AI on geodata applied to conversion and CTR prediction in precision marketing. Contract Art. 83 between Omnicom Media Group S.A. and Univ. Complutense de Madrid. PI: David Gómez-Ullate (UCM-ICMAT), 01/12/2017 - 30/09/2018, Sum: 20.400 EUR.&lt;/li>
&lt;/ul>
&lt;p>One of the central problems in marketing and advertising is to answer the following question:&lt;/p>
&lt;p>How much do advertising campaigns actually increase sales? And which channels are worth the investment?&lt;/p>
&lt;p>In this project, we study real sales and advertising data from a large fast-food franchise to understand how different advertising channels (TV, online, radio, outdoor, etc.) influence weekly sales. Instead of relying on simple correlations, we use a data-driven approach that separates long-term trends, seasonal effects (such as holidays), and external factors like weather or major events.&lt;/p>
&lt;p>The result is a practical decision-support tool that helps managers answer questions such as “Where should I spend my advertising budget next week?” and “What trade-off am I making between expected sales and risk?” The model not only forecasts future sales accurately, but also provides clear guidance on how to allocate advertising budgets more effectively across channels.&lt;/p>
&lt;p>From a methodological perspective, the analysis is based on a Bayesian dynamic linear (state-space) model that builds on the classic Nerlove–Arrow framework for advertising response. This approach models advertising as a stock with delayed and decaying effects over time, while naturally incorporating trends, seasonality, and uncertainty. The Bayesian formulation allows prior information to be included and provides full predictive distributions, making it especially well suited for forecasting and risk-aware budget allocation.&lt;/p>
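&lt;p>The core Nerlove–Arrow mechanism can be sketched in a few lines: advertising spend accumulates into a stock with geometric decay, and the stock drives sales. The snippet uses toy data and ordinary least squares as a stand-in for the full Bayesian structural time series model.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: adstock with geometric decay (toy data, OLS stand-in).
import numpy as np

def adstock(spend, decay=0.7):
    """Nerlove-Arrow stock: A_t = decay * A_{t-1} + spend_t."""
    stock, out = 0.0, []
    for s in spend:
        stock = decay * stock + s
        out.append(stock)
    return np.array(out)

rng = np.random.default_rng(0)
tv = rng.gamma(2.0, 10.0, size=104)              # two years of weekly spend
sales = 200 + 1.5 * adstock(tv) + rng.normal(0, 5, 104)

# OLS recovers the adstock effect; the paper fits a Bayesian DLM instead.
A = np.column_stack([np.ones(104), adstock(tv)])
beta, *rest = np.linalg.lstsq(A, sales, rcond=None)
print("baseline and adstock coefficient:", beta)
&lt;/code>&lt;/pre>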
&lt;p>This work shows how modern statistical methods can turn complex business data into actionable insights for real-world decision making. In addition, we explored how different advertising plans can be compared using multi-objective optimization, highlighting trade-offs between expected sales, risk, and innovation.&lt;/p>
&lt;p>This project was an Art.83 collaboration between my group at Universidad Complutense and Omnicom Media Group. Besides providing an actionable model for the advertising company, it led to a couple of academic publications:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://onlinelibrary.wiley.com/doi/10.1002/asmb.2460" target="_blank" rel="noopener">Assessing the effect of advertising expenditures upon sales:A Bayesian structural time series model&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://link.springer.com/chapter/10.1007/978-3-319-89824-7_9" target="_blank" rel="noopener">Bayesian Factorization Machines for Risk Management and Robust Decision Making&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Smart Retrieval and Structuring of Legal Documents</title><link>https://dgullate.github.io/industry/legal-document-retrieval/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/legal-document-retrieval/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Natural Language Processing with Deep Learning for retrieval of legal documents.&lt;/p>
&lt;p>This project was developed in the early days of Deep Learning NLP, before the transformer architecture was built into commercial products.&lt;/p>
&lt;p>In 2018, Lefebvre (Spain’s leading provider of legal information) sought to modernize the way it managed judicial content. The goal was to automatically classify, label, and extract relevant information from a corpus of over one million court rulings and legal documents — a repository that continues to grow daily.&lt;/p>
&lt;p>Beyond structuring the data, the project aimed to enable advanced search capabilities that would support faster and more informed legal decision-making.&lt;/p>
&lt;p>The project began with entirely unlabeled data. Due to the highly specialized nature of legal language — particularly within the Spanish legal system — off-the-shelf pre-trained NLP models were not suitable.&lt;/p>
&lt;p>Addressing this required the development of a dedicated annotation pipeline, the design of domain-specific labeling strategies, and the implementation of an active learning framework to efficiently guide expert annotation. At the time (2018), this meant deploying state-of-the-art NLP methodologies adapted specifically to the legal domain.&lt;/p>
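&lt;p>A minimal sketch of such an active learning loop is shown below, using uncertainty sampling with a generic scikit-learn classifier on synthetic document vectors; the actual legal-domain models and annotation tooling are confidential.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: uncertainty-sampling active learning loop (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for document vectors and (hidden) expert labels.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(5000, 50))
y_pool = (X_pool[:, 0] + 0.3 * rng.standard_normal(5000) > 0).astype(int)

labeled = list(range(20))                        # small seed set
for round_id in range(5):
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    p = clf.predict_proba(X_pool)[:, 1]
    ranked = np.argsort(np.abs(p - 0.5))         # closest to 0.5 first
    lab = set(labeled)
    new = [int(i) for i in ranked if i not in lab][:10]
    labeled.extend(new)                          # send these to the expert
    print(round_id, "pool accuracy:", clf.score(X_pool, y_pool))
&lt;/code>&lt;/pre>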
&lt;p>The project delivered a hierarchical classification system capable of organizing judgments and legal documents across multiple levels of legal categories, achieving accuracy rates above 90%. In addition, the implementation of semantic search capabilities improved information retrieval performance. Compared to the previous system, the new solution was 22 times more efficient, significantly reducing operational workload and increasing productivity and service responsiveness.&lt;/p>
&lt;p>Due to confidentiality agreements, the team could not publish the results of this project, but they were presented at the specialized conference &lt;a href="https://jurix.oeg.fi.upm.es/iberlegal.html" target="_blank" rel="noopener">JURIX 2019 - IberLegal&lt;/a> with a &lt;a href="https://jurix.oeg.fi.upm.es/industryprogram.html" target="_blank" rel="noopener">talk in the industry session&lt;/a>.&lt;/p>
&lt;p>I was Principal Investigator of the project and responsible for delivery of results under the contract between UCA Datalab and Quantum Analytics.&lt;/p>
&lt;p>Contract Art. 83 between Quantum Analytics and Universidad de Cádiz&lt;/p>
&lt;p>PI: David Gómez-Ullate (UCA), 02/07/2018 – 01/07/2019, Sum: 72.600 EUR.&lt;/p></description></item><item><title>SNOMED-CT Coding for Pathology Reports</title><link>https://dgullate.github.io/industry/snomed-ct-coding/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/snomed-ct-coding/</guid><description>&lt;h2 id="overview">Overview&lt;/h2></description></item><item><title>Tun-AI</title><link>https://dgullate.github.io/industry/tun-ai/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://dgullate.github.io/industry/tun-ai/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>After successful delivery of the project on dFAD drift prediction, Satlink wanted to develop an AI algorithm to automate and improve the conversion of raw echosounder signal into tuna presence, which lies at the core of their business. Previously, this information, with its complex spatio-temporal patterns, often had to be interpreted by an expert biologist, and the interpretation frequently carried strong biases, besides being expensive and not scaling to a fleet of thousands of vessels. As supervision signal, we received 20 years of tuna catch data from OPAGAC, and we harvested echosounder data and oceanographic variables from the days prior to each capture as predictor variables to build the first tuna presence detector to be deployed at commercial scale by Satlink.&lt;/p>
&lt;p>The Tun-AI project develops an AI-driven pipeline to transform raw data from echosounder buoys attached to drifting Fish Aggregating Devices (dFADs) into meaningful insights about tuna presence and biomass at sea. These buoys — widely deployed across tropical oceans as part of tuna purse-seine fisheries — continuously transmit location and acoustic backscatter information that can indicate the presence of fish under the buoy. By combining buoy data with satellite-derived oceanographic information (e.g., temperature, currents, chlorophyll) and historical catch records, the Tun-AI system uses machine learning models to estimate tuna biomass with high accuracy, effectively turning industrial monitoring infrastructure into a global biological sensor network.&lt;/p>
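&lt;p>As a sketch of the pipeline’s data layer, the snippet below assembles one training row per buoy-day from echosounder, oceanographic, and catch data, and fits a classifier. All file and column names are hypothetical; the real features, catch records, and model details are proprietary.&lt;/p>
&lt;pre>&lt;code class="language-python"># Sketch: buoy-day feature table and tuna presence model (hypothetical names).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs, one row per buoy and day.
echo = pd.read_csv("echosounder_daily.csv", parse_dates=["date"])
ocean = pd.read_csv("oceanography.csv", parse_dates=["date"])   # SST, chl, ...
catch = pd.read_csv("catches.csv", parse_dates=["date"])        # OPAGAC sets

df = echo.merge(ocean, on=["buoy_id", "date"])
df = df.merge(catch[["buoy_id", "date", "tuna_present"]], on=["buoy_id", "date"])

# Aggregate acoustic backscatter over the days before each catch event.
df = df.sort_values(["buoy_id", "date"])
df["echo_3d_mean"] = (df.groupby("buoy_id")["backscatter"]
                        .transform(lambda s: s.rolling(3, min_periods=1).mean()))

features = ["echo_3d_mean", "sst", "chlorophyll", "current_speed"]
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(df[features], df["tuna_present"])
print("training accuracy:", model.score(df[features], df["tuna_present"]))
&lt;/code>&lt;/pre>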
&lt;p>Beyond improving biomass estimation for fishery operations, Tun-AI opens new avenues for scientific research into tuna behaviour and ecology. Traditional ecological studies of highly migratory species like tuna are costly and geographically limited; by harnessing data from thousands of buoys over long time periods, the project offers a cost-effective way to observe patterns of tuna aggregation and movement on an ocean scale. This collaborative effort between industry (buoy manufacturers and fleets) and researchers demonstrates how AI can add value to existing maritime technologies and support more sustainable, data-informed fishery management.&lt;/p>
&lt;p>The project allowed Satlink to automate and serve predictions in real time with higher accuracy than the human predictions in use before the project. The AI model is currently in use by more than 1500 fishing vessels. I was Principal Investigator and responsible for the contract, which was executed by Komorebi AI, where I supervised a team of 4 data scientists who built, evaluated and deployed the model in production.&lt;/p>
&lt;h3 id="media-coverage">Media Coverage&lt;/h3>
&lt;ul>
&lt;li>Tun-AI: Using echosounder buoy technology to study tuna behaviour at sea, Research Features &lt;a href="https://researchfeatures.com/tun-ai-echosounder-buoy-technology-study-tuna-behaviour-sea/" target="_blank" rel="noopener">Issue 149&lt;/a> (22/09/23)&lt;/li>
&lt;/ul></description></item></channel></rss>