competitive salary
USA
Information Technology, Engineering
English
in-office, flexible
about the company
Slack has transformed business communication. It’s the leading channel-based messaging platform, used by millions to align their teams, unify their systems, and drive their businesses forward. Only Slack offers a secure, enterprise-grade environment that can scale with the largest companies in the world. It is a new layer of the business technology stack where people can work together more effectively and connect all their other software tools and services.
diversity statement
"Prioritizing diversity, engagement and belonging remains a crucial part of strengthening and maintaining Slack’s culture in a digital-first world."
your area of responsibility
Design and develop scalable and resilient information retrieval infrastructure to power search and other products.
Build and integrate scalable backend systems, platforms, and tools that power our data warehouse and help our partners implement, deploy, and analyze data assets.
Develop and maintain ETL processes to ensure data quality and consistency.
Collaborate with data scientists and machine learning engineers to deploy machine learning models for semantic retrieval on our Kubernetes-based deployment system, using tools such as Chef and HashiCorp Terraform.
Optimize data storage and retrieval to support real-time search queries and recommendations.
Monitor and troubleshoot data pipelines in production.
Work with the Search and ML Infrastructure teams to maintain and improve various data pipelines.
Mentor other engineers and provide in-depth code reviews.
Improve engineering standards, tooling, and processes.
your profile
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
5+ years of relevant technical experience, including significant experience in data engineering, with a focus on search.
Experience with search technologies such as Elasticsearch, Solr, or Lucene.
Proficiency in programming languages such as Python, Java, or Scala.
Experience with big data technologies such as Airflow, EMR, Hadoop, Hive, Spark, and Kafka.
Strong knowledge of SQL and NoSQL databases.
Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization (e.g., Docker, Kubernetes).
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Preferred Qualifications:
Knowledge of natural language processing (NLP) techniques and tools.
Experience with A/B testing and experimentation frameworks.
Familiarity with data visualization tools and techniques.
Experience with vector search systems such as Vespa, Milvus, or Solr.
Experience with ML model serving frameworks and toolkits such as Kubeflow, MLflow, Amazon SageMaker, or Amazon Bedrock.
the benefits
Discover them on our website!