1. About

I am a PhD student in the School of Computer and Communication Sciences at EPFL, advised by Pascal Frossard in the Signal Processing Laboratory 4 (LTS4). My research focuses on model editing, multimodal learning, and the robustness and safety of modern vision and language models.

Before joining EPFL, I obtained my Bachelor of Engineering in Computer Science from the American University of Beirut (AUB), where I worked on deep learning for satellite imagery and spent a research internship at MIT’s MadryLab.

2. Research

Broadly, I am interested in reliable and efficient foundation models, with an emphasis on:

  • Editing and adapting pre-trained models without full retraining.
  • Vision–language modeling and multimodal representations.
  • Robustness, safety, and evaluation of deep learning systems.

My recent work looks at semantic document derendering, weight arithmetic for model editing, and understanding how large models behave when deployed in education and other high-stakes applications.

3. Selected publications

* indicates equal contribution. For a complete and up-to-date list, see Google Scholar.

  • Semantic Document Derendering: SVG Reconstruction via Vision-Language Modeling
    Adam Hazimeh, Ke Wang, Mark Collier, Gilles Baechler, Efi Kokiopoulou, Pascal Frossard
    AAAI Conference on Artificial Intelligence (AAAI), 2026 (to appear)
    arXiv
  • Model Soups Need Only One Ingredient
    A. Abdollahpoorrostam, N. Dimitriadis, A. Hazimeh, P. Frossard
    ICLR 2026 (under review)
    OpenReview
  • Task Addition and Weight Disentanglement in Closed-Vocabulary Models
    A. Hazimeh*, A. Favero*, P. Frossard
    Efficient Systems for Foundation Models II (ES-FoMo II), ICML 2024; extended version on arXiv, 2025
    arXiv
  • Towards Modeling Learner Performance with Large Language Models
    S. P. Neshaei, R. L. Davis, A. Hazimeh, B. Lazarevski, P. Dillenbourg, T. Käser
    International Conference on Educational Data Mining (EDM), 2024
    Code

4. Teaching

EPFL

  • Data Visualization (Teaching Assistant, Spring 2025, Spring 2024)
  • Introductory Physics (Teaching Assistant, Fall 2024)

American University of Beirut (AUB)

  • Introduction to Computation and Programming (Teaching Assistant, Spring 2021)

5. Honors & Awards

  • EDIC Fellowship, EPFL – fellowship for first-year PhD students in computer and communication sciences.
  • Dean’s Honor List, AUB – awarded every semester for high academic standing.

6. Curriculum Vitae

You can find my most recent CV here:

Download CV (PDF)


7. Contact

The best way to reach me is by email:

Email:

Affiliation: Signal Processing Laboratory 4 (LTS4), EPFL, Lausanne, Switzerland

Links: Google Scholar · GitHub · LinkedIn · ORCID · LTS4


For privacy, I do not list a phone number here; please contact me by email for calls or meetings.