
My Tested Guide to OpenAI’s GPT-OSS 20B vs 120B Models

By Asma Arshad, Senior Writer | August 29, 2025 (Updated)
Turns out we have trust issues with AI. A recent industry survey found that 44% of organizations worry about AI transparency and explainability. If they cannot see how it thinks, they do not want it making decisions.

OpenAI has addressed that concern with its new GPT-OSS Models, offering the 20B and 120B open-weight variants. Each model reveals its full reasoning process, allowing you to review, verify, and fine-tune outputs with complete transparency.

In this guide, you’ll get a complete overview of OpenAI’s GPT-OSS models, including what they are, where to get them, and how to use them, along with my hands-on tests of the 20B and 120B versions to see how they really perform.


📌 Executive Summary

Overview: GPT-OSS models are open-weight LLMs with fully transparent reasoning, available under the Apache 2.0 license
Variants: Two sizes, GPT-OSS 20B for consumer-level and GPT-OSS 120B for enterprise-level
Performance: 20B optimized for speed and 120B for deeper reasoning using Mixture-of-Experts architecture
Access: Download freely via Ollama, Hugging Face, and GitHub
Use Cases: Ideal for compliance-focused AI, local deployments, R&D, and domain-specific fine-tuning
Licensing: Apache 2.0 allows unrestricted commercial use and modification 




What Are GPT-OSS Models? Complete Overview

GPT-OSS models are OpenAI’s open-weight, text-only reasoning systems released under the Apache 2.0 license. They can be downloaded, used, modified, fine-tuned, and shared freely for both commercial and non-commercial projects.

Built for transparency and flexibility, GPT-OSS Models let you run powerful reasoning systems locally or within fully controlled environments, making them ideal for projects that require both performance and control.

Key Features:

  • Open weights: Apache 2.0 license allows free commercial use
  • Transparent reasoning: Full visibility into model decision-making
  • Two sizes: 20B (consumer hardware) and 120B (enterprise hardware)
  • Tool integration: Supports web browsing and Python code execution
  • Availability: Released August 2025, downloadable via Ollama, Hugging Face, or GitHub

Performance Comparison of GPT-OSS Models

Here’s a side-by-side comparison of the key performance metrics for the GPT-OSS 20B and 120B models:

| Feature | GPT-OSS 20B | GPT-OSS 120B |
|---|---|---|
| Architecture & Size | 24 layers, 20.9B total params, 3.6B active params per token | 36 layers, 116.8B total params, 5.1B active params per token |
| Mixture-of-Experts Config | 32 experts, top-4 routing, SwiGLU activation, Grouped Query Attention with rotary embeddings, 131k-token context length | 128 experts, top-4 routing, SwiGLU activation, Grouped Query Attention with rotary embeddings, 131k-token context length |
| Quantization | MXFP4 quantization, runs on 16 GB VRAM | MXFP4 quantization, fits on an 80 GB GPU |
| Performance | Competitive with larger models in math and coding despite being ~6× smaller | Approaches OpenAI o4-mini in reasoning, coding, tool use, and health benchmarks |
| Speed | Faster responses, quick load | Slower load, requires powerful GPUs |
| Reasoning Depth | Good | Deeper, complex reasoning |
| Accuracy | High on math and reasoning | High on math and reasoning |
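As a sanity check on the VRAM figures above, the MXFP4 quantization largely determines the weight footprint. Assuming roughly 4 bits per parameter plus a small overhead for shared block scales (the ~4.25 bits/param figure is my ballpark assumption, not an official number), the estimate takes a few lines of Python:

```python
def quantized_size_gb(total_params_b: float, bits_per_param: float = 4.25) -> float:
    """Rough weight-file size for a quantized model.

    total_params_b: parameter count in billions.
    bits_per_param: MXFP4 stores ~4 bits per weight plus shared block
    scales, so ~4.25 bits/param is a reasonable ballpark (my assumption).
    """
    bytes_total = total_params_b * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

print(round(quantized_size_gb(20.9), 1))   # roughly 11 GB of weights
print(round(quantized_size_gb(116.8), 1))  # roughly 62 GB of weights
```

That lines up with the table: about 11 GB of weights for the 20B (hence the 16 GB VRAM floor, leaving headroom for activations and KV cache) and about 62 GB for the 120B on an 80 GB GPU.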

How to Get Started with GPT-OSS Models?

To begin using GPT-OSS models, you first need to choose the best platform to download and run them based on your needs and technical setup.

Download Options

  1. Ollama is the easiest way to install and manage the models locally.
  2. Hugging Face offers direct model downloads with full documentation and support through the Transformers library.
  3. GitHub provides source code, model weights, and detailed technical instructions for advanced customization and setup.

System Requirements

  • The GPT-OSS 20B model works well on machines with at least 16 gigabytes of VRAM and is suitable for consumer-grade GPUs.
  • The GPT-OSS 120B model requires around 80 gigabytes of VRAM and is recommended to run on enterprise-grade GPUs such as the NVIDIA H100.
  • You will need at least 40 gigabytes of free disk space to store the model files.
  • The models support Windows, Linux, and macOS operating systems.

Quick Setup Guide

  1. Install Ollama by running their installation script from the official website.
  2. Download the GPT-OSS 20B model using the Ollama pull command.
  3. Run the GPT-OSS 20B model locally through Ollama to start using it.
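Once the model runs interactively, you can also drive it programmatically. This is a minimal sketch against Ollama's local REST API, assuming the default port 11434 and the `gpt-oss:20b` model tag:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama server; return the reply text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a running Ollama server and a pulled model):
#   print(ask("gpt-oss:20b", "What is 60 miles / 1.5 hours? Explain step by step."))
```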

Licensing and Use Cases

  • GPT-OSS models are released under the Apache 2.0 license.
  • This license allows free commercial and research use without restrictions.
  • Ideal for projects requiring transparent AI solutions.
  • Perfect for compliance-focused applications.
  • Suitable for use cases needing full control and auditability of AI reasoning.
OpenAI released GPT-OSS models to promote transparency and support the open-source AI movement by providing open weights that foster trust and collaboration.

GPT-OSS models offer open weights and transparent chain-of-thought reasoning, allowing users to inspect, verify, and modify the models unlike closed proprietary versions.

The models are aimed at researchers, AI developers, and compliance-focused teams needing transparency, control, and the ability to run AI locally or audit decisions.

GPT-OSS Models Architecture: How They’re Built

GPT-OSS is built as an autoregressive Mixture-of-Experts transformer that combines sparse expert routing with dense transformer blocks to deliver efficient, transparent agentic reasoning. Here is how GPT-OSS is built:


  • Base Architecture – Autoregressive Mixture-of-Experts transformers building on GPT-2/GPT-3 designs with Pre-LN placement and RMSNorm.
  • Tokenizer – o200k_harmony Byte Pair Encoding with 201,088 tokens, optimized for harmony chat format.
  • Pre-Training – Trillions of tokens with emphasis on STEM, coding, general knowledge; filtered for harmful content; trained on NVIDIA H100 GPUs with Flash Attention and Triton kernels.
  • Post-Training – Chain-of-Thought reinforcement learning, Harmony Chat Format for structured role-based instructions, Variable Effort Reasoning (low, medium, high), and Agentic Tool Use (browsing, Python, function calling).
  • Evaluation Methodology – Benchmarked on reasoning, factuality, coding, tool use, health, and multilingual capabilities, with comparisons to OpenAI o3, o3-mini, and o4-mini.
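The "active params" figures in the performance table follow directly from this MoE design: with top-4 routing, only 4 of 32 (20B) or 4 of 128 (120B) expert blocks fire per token, while attention and embedding weights are always active. Given the published total and active counts, we can back out the implied always-active share. This is illustrative arithmetic, not an official figure:

```python
def implied_shared_b(total_b: float, active_b: float,
                     num_experts: int, top_k: int) -> float:
    """Back out the always-active (non-expert) parameter count, in billions,
    implied by published total/active figures under top-k expert routing.

    Model: active = shared + (total - shared) * top_k / num_experts
    """
    frac = top_k / num_experts
    return (active_b - frac * total_b) / (1 - frac)

print(round(implied_shared_b(20.9, 3.6, 32, 4), 2))    # 20B: ~1.1B always-active
print(round(implied_shared_b(116.8, 5.1, 128, 4), 2))  # 120B: ~1.5B always-active
```

In other words, only a small always-on core plus a thin routed slice of experts runs per forward pass, which is why a 116.8B-parameter model can respond with only 5.1B active parameters per token.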

Key Features of GPT-OSS Models: Why They Stand Out

Here are the main features that make GPT-OSS models powerful and flexible. They offer open licensing, strong task-solving abilities, deep customization options, and clear, transparent reasoning to help you build reliable AI solutions.


  • Permissive License: GPT-OSS models come with the Apache 2.0 license. This means you can freely build, customize, and deploy them without worrying about copyleft rules or patent risks, whether for experiments or commercial use.
  • Designed for Agentic Tasks: These models excel at following instructions and using tools like web search and Python code execution within their reasoning process. This makes them ideal for complex, agent-like tasks.
  • Deep Customization: You can adjust how much reasoning effort the model uses at low, medium, or high levels. Plus, full-parameter fine-tuning lets you tailor the model to fit your specific needs.
  • Full Chain-of-Thought Access: GPT-OSS models provide complete step-by-step reasoning. This transparency makes debugging easier and builds greater trust in the answers they produce.
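The variable reasoning effort is selected through the system prompt of the harmony chat format. Below is a sketch of building such a message list; the exact "Reasoning: <level>" phrasing is my assumption here, so consult the model card for the canonical syntax:

```python
def chat_messages(user_prompt: str, effort: str = "medium") -> list[dict]:
    """Chat messages with a reasoning-effort hint.

    GPT-OSS exposes low/medium/high reasoning effort via its system
    prompt; the "Reasoning: <level>" phrasing used here is an
    assumption -- check the model card for the canonical syntax.
    """
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning effort: {effort}")
    return [
        {"role": "system", "content": f"Reasoning: {effort}"},
        {"role": "user", "content": user_prompt},
    ]

msgs = chat_messages("Prove that 60 / 1.5 = 40.", effort="high")
print(msgs[0]["content"])  # -> Reasoning: high
```

Higher effort buys longer chains of thought at the cost of latency, which is the knob you would turn for the algebra-style tests later in this guide.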

How Do GPT-OSS Models Compare with Other OpenAI Models?

The GPT-OSS lineup includes the 20B and 120B models, both released under the Apache 2.0 license with full transparency in reasoning. Here’s how they compare to other popular OpenAI models:

| Model | License | Reasoning Visibility | Context Length | Hardware Needs | Strengths | Weaknesses |
|---|---|---|---|---|---|---|
| GPT-OSS 20B | Apache 2.0 | ✅ Full chain-of-thought | 131k tokens | 16 GB VRAM | Fast, transparent, runs locally | Less deep reasoning than 120B |
| GPT-OSS 120B | Apache 2.0 | ✅ Full chain-of-thought | 131k tokens | 80 GB VRAM | Powerful, deep reasoning | Requires high-end GPUs |
| GPT-4 / GPT-4 Turbo | Proprietary | ❌ No reasoning visibility | 128k+ tokens | Cloud only | Highly accurate, multimodal | Closed-source, no transparency |
| GPT-3.5 | Proprietary | ❌ No reasoning visibility | 4k tokens | Cloud only | Widely used, reliable | Closed-source, limited context |

Why GPT-OSS is Different

  • Full transparency – Reveals its reasoning steps completely, enabling auditable AI workflows.
  • Permissive license – Apache 2.0 allows unrestricted commercial use, unlike Meta’s more restrictive terms.
  • Tool-use ready – Supports browsing, Python execution, and structured instructions, functioning more like a full AI agent than a text generator.
Bottom line: If you need full transparency in AI reasoning, GPT-OSS is unmatched among open or semi-open models. If you want raw accuracy without visible reasoning, alternatives such as LLaMA or Claude offer more mature ecosystems.

Behind the Scenes: Getting Started with GPT-OSS Models

At AllAboutAI, I’m no stranger to testing different LLMs and AI agents, but trust me, tackling GPT-OSS models was a whole new level of sweat and frustration. I found three main ways to access them: downloading Ollama, signing in to Hugging Face, or using GitHub.

I went with Ollama because it’s popular and makes running both the 20B and 120B models a bit easier (well, as easy as it gets!).

Here’s the catch: hardware struggles are real. The 20B model runs on a decent consumer laptop but still needs a ton of storage. The 120B? Let’s just say my laptop nearly begged for mercy, threw a tantrum, and needed a serious upgrade.

After shutting down everything, clearing caches, and having a long talk with my computer about handling big workloads, I finally got things running. So if you’re ready to play with these models, prepare to pamper your PC like it’s your new best friend.


My Testing Methodology for Evaluating GPT-OSS Models

To ensure a fair and meaningful comparison between GPT-OSS 20B and GPT-OSS 120B, I designed a structured testing process that mirrors real-world use cases for transparent agent reasoning.

The goal was not just to see if the models could “answer questions,” but to measure how they reason, explain, and adapt across different challenges.

Hardware & Environment

  • System for GPT-OSS 20B: Mid-range Windows PC with consumer-grade GPU (suitable for smooth operation).
  • System for GPT-OSS 120B: High-performance setup with an NVIDIA H100 (80GB VRAM) to handle the much larger model size and higher memory demands.
  • Both models were run through Ollama for local execution, ensuring fully offline inference and consistent conditions.

Test Categories

I ran the same set of tests on both models to allow direct comparison:

  • Basic Math Reasoning: Simple arithmetic problems requiring transparent step-by-step calculation (e.g., calculating speed).
  • Multi-Step Algebraic Reasoning: Word problems translated into algebraic equations with stepwise solution (e.g., age-related problems).
  • Factual Knowledge Recall: Questions requiring accurate retrieval of well-known facts (tested only on GPT-OSS 120B).
  • Scientific Explanation: Simplifying complex scientific processes with clear reasoning (tested only on GPT-OSS 120B).

Evaluation Criteria

For every test, I recorded:

  • Accuracy: Was the answer correct?
  • Clarity of Reasoning: Were the steps logical and easy to follow?
  • Latency: Time taken to generate a complete response.
  • Depth: Did the answer cover multiple angles or perspectives?
  • Transparency: Could a non-expert understand how the conclusion was reached?
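Two of these criteria, accuracy and latency, are easy to capture mechanically; the rest I judged by hand. Here is a small hypothetical harness showing the record shape I kept per test:

```python
import time

def evaluate(model_fn, prompt: str, expected_substring: str) -> dict:
    """Score one test case on the two criteria that can be checked
    mechanically: accuracy (expected result present in the answer)
    and latency. Clarity, depth, and transparency were judged by hand."""
    start = time.perf_counter()
    answer = model_fn(prompt)
    latency_s = time.perf_counter() - start
    return {
        "accurate": expected_substring in answer,
        "latency_s": round(latency_s, 2),
        "answer": answer,
    }

# Stand-in model function, just to show the record shape:
record = evaluate(
    lambda p: "Speed = 60 / 1.5 = 40 mph.",
    "If a train travels 60 miles in 1.5 hours, what is its speed in mph?",
    "40",
)
print(record["accurate"])  # -> True
```

In practice, `model_fn` would wrap a local Ollama call; the lambda above is a stand-in so the record structure is visible without a running model.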


What Types of Tests Did I Use to Evaluate GPT-OSS Models?

To evaluate these models, I tested them on specific queries that cover basic arithmetic, algebraic reasoning, factual knowledge, and scientific explanation. Each was chosen to assess both answer accuracy and clarity of reasoning.

| Query Type | Sample Query | Purpose of the Test | Model Tested |
|---|---|---|---|
| Basic Math | “If a train travels 60 miles in 1.5 hours, what is its speed in mph? Please explain your reasoning step-by-step.” | Assess arithmetic skills and transparent stepwise reasoning | GPT-OSS 20B |
| Multi-Step Algebraic Reasoning | “A mother is three times as old as her son. In 5 years, she will be twice as old as him. How old are they now? Please explain step-by-step.” | Evaluate ability to solve and explain multi-step algebraic problems | GPT-OSS 20B |
| Factual Knowledge | “Who was the first president of the United States? Please explain briefly.” | Test factual accuracy and concise knowledge retrieval | GPT-OSS 120B |
| Scientific Explanation | “Explain how photosynthesis works in simple terms, showing your reasoning step-by-step.” | Test capability to simplify and clearly explain complex scientific processes | GPT-OSS 120B |

How Did OpenAI’s GPT-OSS Models Respond to My Test Queries?

Let’s have a look at how GPT-OSS 20B and 120B models handled the questions I threw at them. I chose queries that reflect real-world tasks, focusing on reasoning, factual accuracy, and clarity of explanation.


GPT-OSS 20B: Clear and Accessible Reasoning for Everyday Problems

GPT-OSS 20B is designed to run efficiently on more accessible hardware. To evaluate its strengths, I focused on everyday reasoning questions that test how well it explains its thought process clearly and transparently.

Basic Math Reasoning

I asked: “If a train travels 60 miles in 1.5 hours, what is its speed in miles per hour? Please explain your reasoning step-by-step.”

The model responded with a clear, step-by-step explanation. It calculated speed by dividing distance by time, showing the arithmetic process leading to 40 miles per hour.

This demonstrates GPT-OSS 20B’s ability to deliver simple calculations with transparent reasoning.
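The model’s arithmetic is easy to verify yourself: speed is distance divided by time.

```python
distance_miles = 60
time_hours = 1.5
speed_mph = distance_miles / time_hours  # speed = distance / time
print(speed_mph)  # -> 40.0
```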


Reasoning Snapshot — GPT-OSS 20B

| Test Type | Steps Given | Detail Level | Accuracy | Style |
|---|---|---|---|---|
| Basic Math | 5 | Medium | ✅ Accurate | Clear, beginner-friendly |
| Algebra | 7 | Medium-High | ✅ Accurate | Methodical and contextual |

What I observed for GPT-OSS 20B: The 20B handled most reasoning, factual accuracy, and clarity tasks well, though not as fast or deep as 120B. It’s lighter on hardware, making it more practical for users without high-end GPUs.


Multi-Step Algebraic Reasoning

For a deeper test, I gave GPT-OSS 20B a classic age-related algebra problem: “A mother is three times as old as her son. In 5 years, she will be twice as old as him. How old are they now? Please explain step-by-step.”

The model produced a thorough and logical stepwise solution. It converted the word problem into equations, solved for the ages, checked the answer, and even noted the real-world plausibility of the results.

This shows the model’s strength in handling complex reasoning with clarity and context.

These scenarios together provide a well-rounded test of GPT-OSS 20B’s ability to explain its reasoning clearly across basic arithmetic and algebraic problem solving.
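The model’s algebra checks out too. Translating the word problem into equations and substituting:

```python
# Let S be the son's age and M the mother's age today:
#   M = 3 * S              (mother is three times as old)
#   M + 5 = 2 * (S + 5)    (in 5 years, twice as old)
# Substituting the first into the second: 3S + 5 = 2S + 10  =>  S = 5.
son = 5
mother = 3 * son
assert mother + 5 == 2 * (son + 5)  # the consistency check the model also ran
print(son, mother)  # -> 5 15
```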


GPT-OSS 120B: In-Depth Knowledge and Nuanced Scientific Understanding

GPT-OSS 120B is a larger, more capable model designed to handle complex reasoning, detailed explanations, and factual knowledge with nuance. To evaluate its strengths, I tested it with queries that demand multi-step reasoning and accurate domain knowledge.

Below are two key examples showcasing its performance.

Factual Knowledge Question

To check the factual knowledge of GPT-OSS 120B, I asked: “Who was the first president of the United States? Please explain briefly.”

The model carefully parsed the question and provided a clear, accurate, and concise answer about George Washington, including important historical context.

This shows 120B’s ability to retrieve reliable historical facts quickly and present them understandably.

Reasoning Snapshot — GPT-OSS 120B

| Test Type | Steps Given | Detail Level | Accuracy | Style |
|---|---|---|---|---|
| Factual Knowledge | 4 | Medium | ✅ Accurate | Concise, contextual |
| Scientific Explanation | 9 | High | ✅ Accurate | Detailed, accessible |

What I observed for GPT-OSS 120B:
GPT-OSS 120B consistently delivered deeper, more nuanced reasoning than the 20B, excelling across all evaluation criteria. Hardware demands are heavy: think high-end workstation or data-center GPUs like the H100.


Scientific Explanation with Reasoning

I also tested GPT-OSS 120B’s ability to explain complex scientific topics by asking: “Explain how photosynthesis works in simple terms, showing your reasoning step-by-step.”

The model offered a detailed yet simple explanation of photosynthesis, describing how sunlight captured by chlorophyll drives the conversion of carbon dioxide and water into glucose and oxygen.

This demonstrates 120B’s strength in breaking down complicated concepts into easy-to-follow reasoning.



How Do GPT-OSS 20B and 120B Compare in Performance, Usability, and Challenges?

Here’s a quick comparison to help you understand how GPT-OSS 20B and 120B differ in key areas like performance, usability, and challenges:

| Feature | GPT-OSS 20B | GPT-OSS 120B |
|---|---|---|
| Speed | Loads quickly and runs smoothly on regular consumer hardware with 16 GB VRAM | Loads much slower and requires powerful GPUs and large memory |
| Reasoning Depth | Clear, straightforward explanations suited to everyday questions and coding tasks | Deep, detailed, multi-step reasoning with complex and nuanced answers |
| Example Performance | Solves basic math and algebra problems with concise step-by-step logic | Breaks down complex topics like transformer architectures with long, layered responses and examples |
| Hardware Requirements | Runs well on mid-range systems without special setup | Needs advanced hardware like NVIDIA H100 GPUs and ample storage space |
| User Accessibility | Very easy to use for most users without special hardware | Limited to users with high-end, expensive equipment |
| Issues Faced | Needed to clear background apps and cache to prevent storage and memory overload | Faced serious storage and memory constraints requiring a hardware upgrade and patience during long downloads |
| Overall Rating | ⭐⭐⭐⭐ (4/5) – Fast, efficient, and reliable for typical tasks | ⭐⭐⭐⭐⭐ (5/5) – Best for complex reasoning but requires heavy hardware investment |

Which GPT-OSS Model Should You Choose? A Quick Decision Guide

Not sure whether to pick 20B or 120B? This simple guide will help you decide based on your needs and hardware setup.

| Your Need | Recommended Model | Why |
|---|---|---|
| Fast responses on regular tasks | GPT-OSS 20B | Runs smoothly on most consumer laptops with quick answers |
| Complex reasoning and detailed explanations | GPT-OSS 120B | Offers deeper insights but requires high-end hardware and more time |
| Limited hardware/storage capacity | GPT-OSS 20B | Smaller model size fits better on average machines |
| Powerful GPU and a desire for full power | GPT-OSS 120B | Takes advantage of high VRAM and processing power |
| Balance between speed and depth | Start with 20B, switch to 120B when needed | Efficient for most tasks, scalable for complex queries |
💡My Verdict: After testing both models, I’d use GPT-OSS 20B for fast, everyday tasks due to its speed and local usability. The 120B excels at deep reasoning but requires powerful GPUs, making it less practical for most users.

What Are the Challenges, Risks, and Limitations of GPT-OSS Models?

GPT-OSS offers strong capabilities, but it also carries important risks. Understanding them is essential for safe and responsible use.

Hallucinations and Accuracy Limits
GPT-OSS can produce answers that sound correct but are wrong or fabricated. This is risky in technical or high-stakes fields. Chain-of-thought reasoning and tool use can reduce errors, but human verification and trusted references remain essential.

Bias and Fairness
The model matches OpenAI’s proprietary models in bias tests but is not bias-free. Gaps in training data may cause weaker performance for certain groups or topics. Continuous audits and fine-tuning on diverse datasets improve fairness.

Safety Risks and Misuse
Open models can be misused. ESET discovered PromptLock, Golang ransomware for Windows and Linux that used GPT-OSS 20B via the Ollama API to generate Lua scripts for file scanning, data exfiltration, and encryption, highlighting the cyber risks of open-weight models.

No Guaranteed Truthfulness or Sources
GPT-OSS does not provide built-in citations or admit uncertainty. It may share outdated or incomplete information confidently. Linking it to updated databases can keep responses current.

Limited Multimodal Ability
GPT-OSS only processes text. It cannot handle images, scans, or visual data without other AI tools. Vision tasks require pairing it with separate models.

Regulatory and Ethical Considerations
Organizations are responsible for advice the model generates. Ethical use requires transparency, privacy protection, and proper consent when handling sensitive data.

Support and Maintenance
OpenAI will not provide continuous updates. Users must handle improvements, safety tuning, and security patches themselves to keep the model effective and safe.  

💡My Perspective: GPT-OSS marks a big step toward transparent AI, but users must handle it responsibly. Risks like hallucinations, bias, and misuse remain if safeguards are weak. The key challenge is balancing safety with openness in the AI community.

What Are People Saying About GPT-OSS 20B and 120B Across Different Platforms?

Here’s a quick roundup of what users and experts are discussing about GPT-OSS models on LinkedIn, Reddit, and YouTube, highlighting their strengths, challenges, and real-world usability.

LinkedIn
Alex Xu, Co-Founder of ByteByteGo, shared a detailed breakdown of GPT-OSS 120B and 20B. He emphasized their efficient tokenization, Mixture-of-Experts architecture, and cost-effective performance, noting they aim to bring strong real-world results without huge expenses.

Reddit
Users on Reddit praise the models for being truly open-weight under Apache 2.0 license, allowing local use on modest hardware. The Mixture-of-Experts design gets positive remarks for customization potential.

However, concerns exist about heavy output filtering, higher hallucinations, weaker multilingual support, repetitive responses, and inconsistent coding accuracy. Some users prefer other open-source models due to these quirks.

YouTube
A content creator tested GPT-OSS 20B on a high-end Apple M3 Max laptop with 64GB RAM, reporting smooth and near-instant responses. They found Ollama the easiest way to install and interact with the models without needing command line skills.

While LM Studio also works, it requires more setup effort. They noted the 20B model runs efficiently on strong laptops, but the 120B model demands much more powerful desktop GPUs.

Across all platforms, users value the models’ openness and potential for local use, while acknowledging the hardware demands and output inconsistencies that shape the choice: 20B for convenience, 120B for deeper reasoning. Overall, feedback reflects cautious optimism tempered by practical considerations.


The Future of OpenAI’s GPT-OSS Models and Transparent Agent Reasoning: What to Expect

GPT-OSS models will push transparent agent reasoning forward by making AI more open, understandable, and specialized, enabling users to trust and interact with AI in new, meaningful ways. Here’s what the future holds:

Larger and Specialized Open Models
We expect bigger, specialized open models. GPT-OSS 120B already approaches o4-mini, so an open GPT-4-class model may come soon. This may push others to open their models. Domain-specific GPT-OSS variants for fields like medicine will emerge for better task handling.

Hybrid AI Systems
Open models like GPT-OSS will handle simple queries locally. Complex ones will go to powerful closed models like GPT-5. This saves time, protects privacy, and boosts accuracy.

Improving Truthfulness and Reducing Hallucinations
Researchers will link GPT-OSS to search engines and databases for fact-checking and source citation. Step-by-step reasoning will improve to avoid errors. User feedback will guide fixes for errors and biases.

Enhanced Safety Mechanisms
Safety features will be built-in and hard to remove. Techniques like constitutional AI and self-censorship keep models safe. Open-source filters will block harmful content. Sensitive areas like medicine may require human oversight.

Multimodal and Tool-Rich Extensions
GPT-OSS will gain abilities like image and chart analysis. Specialized tools for biomedicine, such as drug databases, will integrate without altering the core model. Research will focus on usability and reliability.

Ecosystem and Community Growth
OpenAI aims to build a strong open AI community based on shared values. Researchers will share fine-tuned models, data, and ideas, improving GPT-OSS’s safety, reasoning, and fairness while promoting innovation.

Performance and Efficiency Improvements
Engineers will optimize GPT-OSS for speed and memory use with quantization and pruning. Handling very long inputs (up to the 131k-token context window) will improve. These changes will make GPT-OSS faster, cheaper, and more accessible.



FAQs

What is GPT-OSS?
GPT-OSS is OpenAI’s open-weight LLM series, supported in Hugging Face Transformers (v4.55.0). It uses a Mixture-of-Experts architecture with 4-bit (MXFP4) quantization for fast, efficient inference.

What is OpenAI’s latest model?
The latest model is GPT-5, built for coding and agentic tasks, achieving state-of-the-art results with 74.9% on SWE-bench Verified and 88% on Aider polyglot benchmarks.

What is GPT-3?
GPT-3 is a large language model trained on internet text, requiring minimal input to generate coherent, context-relevant, and sophisticated text.

Which models power ChatGPT?
ChatGPT includes GPT-3.5, GPT-4, GPT-4 Turbo, and GPT-4o. GPT-4 is the most advanced, GPT-4 Turbo is optimized for speed, and GPT-4o handles multimodal tasks like text, images, and voice.

Final Thoughts

After testing OpenAI’s GPT-OSS Models 20B and 120B, it is clear they each bring unique strengths. The 20B is lightning-fast and perfect for daily tasks while the 120B offers powerful reasoning capabilities but requires the right hardware. Curious which one fits your needs?

People are really impressed by the models’ transparent reasoning and open-weight design which could change how we trust and use AI. Want to see how these innovations might reshape the future? Check out the key features and real user reactions!

Asma Arshad

Writer, GEO, AI SEO, AI Agents & AI Glossary

Asma Arshad, a Senior Writer at AllAboutAI.com, simplifies AI topics using 5 years of experience. She covers AI SEO, GEO trends, AI Agents, and glossary terms with research and hands-on work in LLM tools to create clear, engaging content.

Her work is known for turning technical ideas into lightbulb moments for readers, removing jargon, keeping the flow engaging, and ensuring every piece is fact-driven and easy to digest.

Outside of work, Asma is an avid reader and book reviewer who loves exploring traditional places that feel like small trips back in time, preferably with great snacks in hand.

Personal Quote

“If it sounds boring, I rewrite it until it doesn’t.”

Highlights

  • US Exchange Alumni and active contributor to social impact communities
  • Earned a certificate in entrepreneurship and startup strategy with funding support
  • Attended expert-led workshops on AI, LLMs, and emerging tech tools
