AWS Data Engineer
Experience:
10+ Years
Location:
Chicago, IL (Local Only)
Mandatory Skills:
Python, PySpark, AWS
Good to Have Skills:
EMR, Spark, Kafka/Kinesis
What is in it for you?
As a Data Engineer with deep expertise in Python, AWS, big data ecosystems, and SQL/NoSQL technologies, you will be responsible for driving scalable, real-time data solutions leveraging CI/CD and stream-processing frameworks.
Requirements:
- Proficiency in multiple programming languages (Python is a must), with the ability to quickly learn new ones.
- Expertise in SQL, including complex queries and relational databases (preferably PostgreSQL), as well as NoSQL databases such as Redis and Elasticsearch.
- Extensive big data experience, including EMR, Spark, Kafka/Kinesis, and optimizing data pipelines, architectures, and datasets.
- AWS expert with hands-on experience in Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark, and Docker.
- Proficient in CI/CD development using Git, Terraform, and Agile methodologies.
- Comfortable with stream-processing systems (e.g., Storm, Spark Streaming) and workflow management tools (e.g., Airflow).
- Exposure to knowledge graph technologies (graph databases, OWL, SPARQL) is a plus.
Educational Qualifications:
Engineering degree: BE/ME/BTech/MTech/BSc/MSc.
Technical certifications in multiple technologies are desirable.