JPMorgan Chase hiring Software Engineer II - Data Engineer

Company : JPMorgan Chase

Role : Software Engineer II - Data Engineer

Experience : 3 years

Qualification : Graduate

Location : Bengaluru

As a Software Engineer II at JPMorgan Chase in the Corporate Data Services division, you play a vital role as an experienced member of an agile team tasked with the design and delivery of reliable, industry-leading technology products in a secure, stable, and scalable manner. 

Your responsibilities include implementing essential technology solutions across diverse technical domains within different business functions, all aimed at supporting the firm’s strategic business goals.

Responsibilities of the Position :

✔️Implements software solutions through design, development, and technical troubleshooting, demonstrating the capacity to think creatively beyond standard methodologies to devise solutions or resolve technical challenges. 

✔️Develops secure, high-quality production code and maintains algorithms that run in sync with the relevant systems. 

✔️Creates architectural and design documentation for intricate applications, ensuring that the software code development adheres to established design constraints. 

✔️Collects, analyzes, synthesizes, and visualizes data from extensive and varied datasets to facilitate the ongoing enhancement of software applications and systems. 

✔️Proactively uncovers latent issues and trends within data, leveraging these insights to enhance coding standards and system architecture. 

✔️Engages with software engineering communities and events that investigate new and emerging technologies. 

✔️Fosters a team culture that values diversity, equity, inclusion, and respect.

Qualification & Skills :

✔️Possession of formal training or certification in software engineering principles, along with a minimum of three years of practical experience.  

✔️Demonstrated experience in the design and implementation of data pipelines within a cloud environment is essential (e.g., Apache NiFi, Informatica).  

✔️A strong background in migrating and developing data solutions on the AWS cloud is necessary, including familiarity with AWS Services and Apache Airflow.  

✔️Experience in constructing and implementing data pipelines with Databricks, including components such as Unity Catalog, Databricks Workflows, and Delta Live Tables.  

✔️A comprehensive understanding of agile methodologies, including CI/CD, application resiliency, and security practices.  

✔️Practical experience in object-oriented programming with Python (particularly PySpark) to develop complex, highly optimized queries for large datasets.  

✔️Familiarity with big data technologies such as Hadoop/Spark, as well as expertise in data modeling and ETL processes.  

✔️Proficient in data profiling and advanced PL/SQL procedures.

Preferred Qualifications & Skills :

✔️Proficiency in Oracle, ETL processes, and data warehousing; additional experience with cloud-based solutions is considered advantageous.
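To give a sense of the extract-transform-load work the qualifications above describe, here is a minimal sketch of a pipeline in plain Python. It deliberately uses only the standard library (no Spark, Databricks, or AWS services are assumed), and the dataset, column names, and aggregation are hypothetical stand-ins for the kind of cleansing and loading a production pipeline would perform:

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw input: trade records, one of which is malformed.
RAW_CSV = """desk,notional
rates,100.5
credit,50.0
rates,not_a_number
credit,25.5
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows whose notional is not numeric, cast the rest."""
    clean = []
    for row in rows:
        try:
            clean.append({"desk": row["desk"], "notional": float(row["notional"])})
        except ValueError:
            # A real pipeline would route bad rows to a quarantine table.
            continue
    return clean

def load(rows: list[dict]) -> dict[str, float]:
    """Load: aggregate notional per desk (stand-in for a warehouse write)."""
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["desk"]] += row["notional"]
    return dict(totals)

result = load(transform(extract(RAW_CSV)))
print(result)  # {'rates': 100.5, 'credit': 75.5}
```

In a tool such as PySpark the same three stages would become DataFrame reads, typed transformations, and table writes, typically orchestrated as tasks in an Airflow or Databricks Workflows DAG.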

Apply : Click Here 

