Specialist Solutions Architect - Data Engineering

Posted 25 Days Ago
Hiring Remotely in United States
$140K-$247K Annually
Senior level
Big Data • Software
The Role
In this role, you will guide customers in building big data solutions on Databricks, provide technical leadership for big data projects, architect data pipelines, and become a technical expert in data lake technology and ingestion workflows. You will also mentor others and contribute to the Databricks Community.

P-226

This role can be remote. 

As a Specialist Solutions Architect (SSA) - Data Engineering, you will guide customers in building big data solutions on Databricks that span a wide variety of use cases. This is a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with Apache Spark™ and expertise in other data technologies. SSAs help customers design and successfully implement essential workloads while aligning their technical roadmap for expanding usage of the Databricks Data Intelligence Platform. As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty - whether that be streaming, performance tuning, industry expertise, or something else.

The impact you will have:

  • Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
  • Architect production level data pipelines, including end-to-end pipeline load performance testing and optimization
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
  • Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures
  • Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
  • Contribute to the Databricks Community

What we look for:

  • 5+ years of experience in a technical role, with expertise in at least one of the following:
    • Software Engineering/Data Engineering: data ingestion, streaming technologies (such as Spark Streaming and Kafka), performance tuning, troubleshooting, and debugging Spark or other big data solutions
    • Data Applications Engineering: building use cases that use data, such as risk modeling, fraud detection, and customer lifetime value
  • Extensive experience building big data pipelines
  • Experience maintaining and extending production data systems to evolve with complex needs
  • Deep Specialty Expertise in at least one of the following areas:
    • Experience scaling big data workloads (such as ETL) that are performant and cost-effective
    • Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
    • Experience with large scale data ingestion pipelines and data migrations - including CDC and streaming ingestion pipelines
    • Expert with cloud data lake technologies - such as Delta and Delta Live
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience
  • Production programming experience in SQL and Python, Scala, or Java
  • 2+ years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures
  • 2+ years of customer-facing experience in a pre-sales or post-sales role
  • Can meet expectations for technical training and role-specific outcomes within 6 months of hire
  • Can travel up to 30% when needed

Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks utilizes the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above. For more information regarding which range applies to your location, visit our page here.


Zone 1 Pay Range

$139,800 – $247,300 USD

Zone 2 Pay Range

$139,800 – $247,300 USD

Zone 3 Pay Range

$139,800 – $247,300 USD

Zone 4 Pay Range

$139,800 – $247,300 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. 

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Top Skills

Java
Python
Scala
SQL
