Sr Data Systems Admin


May 11, 2018
Pasadena, California



Contract to Contract

Job Description
You will be responsible for maintaining and improving service uptime and scaling our systems for continued rapid growth. This role handles planned and unplanned maintenance events, participates in an on-call rotation, and supports a consistent software release process. The ideal candidate has experience managing physical servers on-premises at scale, running on-demand, burstable virtualized environments on public cloud providers, and operating distributed systems with hundreds of nodes and global replication. Excellent communication skills are required to interact successfully with the rest of Client Engineering. This is a "hands-on" role, and competence is expected in Linux fundamentals and the UNIX command line.

Developing and supporting our infrastructure presents many interesting technical challenges. We especially desire candidates with a passion for open-source software and an interest in the latest system architecture trends.

Must have:
MySQL, PostgreSQL
Riak, Redis, and other key-value (KV) store databases
Scripting / automation
Linux command line

Nice to have: 
Large-scale Cloudera / Hadoop cluster administration (1000+ nodes preferred)
Kafka, HBase, Spark
RabbitMQ
Chef or equivalent 

Responsibilities:
  • Design, implement, and support highly performant, highly available infrastructure on both on-premises hardware and public clouds such as AWS, GCP, or equivalent providers.
  • Improve the efficiency and flexibility of our data centers.
  • Build and maintain models for growth and capacity planning.
  • Tune large-scale data clusters for optimal performance and efficiency.
  • Participate in a 24/7/365 on-call rotation.
  • Own the day-to-day health, uptime, monitoring, and reliability of all data platforms and database systems.
  • Work closely with project management and engineering peers to develop innovative technical tools and solutions.
  • Identify tactical issues and react to emerging areas of concern.
  • Adhere to a DevOps philosophy by evangelizing communication, collaboration, and integration with software development teams.
  • Think long-term and be unsatisfied with band-aids.
  • Identify unnecessary complexity and remove it.

Requirements:
  • At least four years of experience in Data or Database Operations, Site Reliability Engineering, System Administration, or equivalent roles.
  • At least three years of experience maintaining production infrastructure hosted on AWS, GCP, or equivalent public cloud providers.
  • Demonstrated experience with network and large-scale UNIX system troubleshooting and maintenance practices.
  • Capability to script and automate solutions with strong competence in at least one programming language.
  • Solid knowledge of UNIX command-line tools.
  • Strong understanding of how to manage public cloud services and tasks such as load balancing, automation through provider APIs, VPCs, serverless computing (Lambda, GCF), backup/restore procedures, and managing policies and resources.
  • Firm grasp of storage protocols and filesystems.
  • Deep experience installing and managing one or more of the following: Hadoop clusters and related services, RDBMS platforms (e.g., MariaDB / MySQL, PostgreSQL, Vertica), and distributed data systems (e.g., Riak, Druid, Kafka).
  • Implementation and management of monitoring and metrics tools (e.g., Nagios, Graphite, Grafana, Sumo Logic).
  • Excellent organizational skills and the ability to work in a fast-paced and hectic work environment.
  • Capable of technical deep-dives into code, networking, systems, and storage alongside SRE and software engineering teams.
  • Willing to occasionally travel to different office locations.
  • Knowledge and interest in the latest system architecture trends.
  • Ability to learn and understand new systems.
  • Ability to communicate effectively and write accurate, clear documentation.
  • Humility and integrity.

Nice to have:
  • Running and troubleshooting Erlang, Java or Python applications.
  • Hardware configurations for data systems.
  • Other operational data technologies like HBase, Spark, Redis and RabbitMQ.
  • Analytical data platforms like Vertica and MicroStrategy.
  • Hadoop-based computational technology like YARN and Impala.
  • Configuration management systems like SaltStack and Chef.
  • Container technology like Docker and Kubernetes.
  • Experience with Cloudera.
  • Agile development practices.

APPLY