
EnCharge AI

LLM Inference Deployment Engineer

Posted 10 Days Ago
Be an Early Applicant
Remote
28 Locations
Mid level

EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today’s best-in-class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power-, energy-, and space-constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.

About the Role

EnCharge AI is seeking an LLM Inference Deployment Engineer to optimize, deploy, and scale large language models (LLMs) for high-performance inference on its energy-efficient AI accelerators. You will work at the intersection of AI frameworks, model optimization, and runtime execution to ensure efficient, low-latency model execution.

Responsibilities

  • Deploy and optimize post-trained LLMs (GPT, LLaMA, Mistral, Falcon, etc.) from model libraries such as Hugging Face.

  • Utilize inference runtimes such as ONNX Runtime and vLLM for efficient execution.

  • Optimize batching, caching, and tensor parallelism to improve LLM scalability in real-time applications.

  • Develop and maintain high-performance inference pipelines using Docker, Kubernetes, and dedicated inference servers.
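The batching and caching work described above can be illustrated with a minimal, framework-agnostic sketch in plain Python. Everything here (`CachedRunner`, `model_fn`, the parameter names) is a hypothetical stand-in for illustration, not EnCharge's actual stack:

```python
class CachedRunner:
    """Toy inference runner: batches prompts and caches completed results."""

    def __init__(self, model_fn, max_batch_size=8):
        self.model_fn = model_fn          # hypothetical batched inference callable
        self.max_batch_size = max_batch_size
        self.cache = {}                   # prompt -> completion

    def run(self, prompts):
        # Deduplicate the requests, keep only cache misses, and batch them.
        misses = [p for p in dict.fromkeys(prompts) if p not in self.cache]
        for i in range(0, len(misses), self.max_batch_size):
            batch = misses[i:i + self.max_batch_size]
            for prompt, output in zip(batch, self.model_fn(batch)):
                self.cache[prompt] = output
        # Serve every request (hits and fresh misses) from the cache.
        return [self.cache[p] for p in prompts]
```

A production runtime such as vLLM replaces the dict cache with paged KV-cache memory and batches continuously at the token level, but the request-level flow is the same.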

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field.

  • Experience in LLM inference deployment, model optimization, and runtime engineering.

  • Strong expertise in LLM inference frameworks (PyTorch, ONNX Runtime, vLLM, TensorRT-LLM, DeepSpeed).

  • In-depth knowledge of the Python programming language for model integration and performance tuning.

  • Strong understanding of high-level model representations and experience implementing framework-level optimizations for Generative AI use cases.

  • Experience with containerized AI deployments (Docker, Kubernetes, Triton Inference Server, TensorFlow Serving, TorchServe).

  • Strong knowledge of LLM memory optimization strategies for long-context applications.

  • Experience with real-time LLM applications (chatbots, code generation, retrieval-augmented generation). 
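The long-context memory optimization mentioned above usually starts from a KV-cache size estimate. Below is a minimal sketch of the standard formula; the model figures are illustrative (LLaMA-2-7B-like) and not specific to any EnCharge target:

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch_size=1, dtype_bytes=2):
    """Bytes needed for the attention KV cache.

    The leading 2 accounts for the separate key and value tensors in each
    layer; dtype_bytes=2 assumes fp16/bf16 storage.
    """
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * dtype_bytes)

# Illustrative 7B-class config: 32 layers, 32 KV heads, head_dim 128.
# At a 4,096-token context in fp16 this comes to exactly 2 GiB per
# sequence -- the motivation for grouped-query attention and cache quantization.
print(kv_cache_bytes(32, 32, 128, 4096) / 2**30)  # 2.0
```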

EnCharge AI is an equal employment opportunity employer in the United States.

Top Skills

DeepSpeed
Docker
Hugging Face
Kubernetes
ONNX Runtime
Python
TensorFlow Serving
TensorRT-LLM
TorchServe
Triton Inference Server
vLLM

