IBM Off Campus Drive: IBM is hiring off campus for the Data Engineer role. B.E / B.Tech / M.E / M.Tech graduates from any batch are eligible. The detailed eligibility criteria and application details are given below.

About IBM:
IBM’s greatest invention is the IBMer. We believe that progress is made through progressive thinking, progressive leadership, progressive policy and progressive action. IBMers believe that the application of intelligence, reason and science can improve business, society and the human condition. Restlessly reinventing since 1911, we are the largest technology and consulting employer in the world, with more than 380,000 IBMers serving clients in 170 countries.
Job Description:
In this role, you’ll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Job Title: Data Engineer
Job Type: Full Time
Work Location: Bangalore
Experience: Entry level
Role and Responsibilities:
- As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs. Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.
Education and Skills:
- Experience developing PySpark code for AWS Glue jobs and EMR; worked on scalable distributed data systems using the Hadoop ecosystem on AWS EMR and the MapR distribution.
- Experience developing Python and PySpark programs for data analysis; good working experience using Python to develop a custom framework for generating rules (similar to a rules engine).
- Experience developing Hadoop streaming jobs in Python to integrate applications with Python API support.
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark; used Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations.
- Experience rewriting Hive queries in Spark SQL to reduce overall batch time.
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
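The "custom framework for generating rules" mentioned above is the kind of pattern candidates are expected to have built. As a rough illustration only, assuming a simple condition/action design (all names here, such as `Rule` and `RuleEngine`, are hypothetical and not part of any IBM or AWS API), a minimal rules-engine sketch in plain Python might look like:

```python
# Minimal sketch of a rules-engine-style framework in plain Python.
# All names (Rule, RuleEngine, the sample rules) are hypothetical
# illustrations of the pattern, not a specific IBM or AWS API.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, Any]], bool]      # predicate over a record
    action: Callable[[Dict[str, Any]], Dict[str, Any]]  # transform on match


class RuleEngine:
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def apply(self, record: Dict[str, Any]) -> Dict[str, Any]:
        # Apply every matching rule's action, in registration order.
        for rule in self.rules:
            if rule.condition(record):
                record = rule.action(record)
        return record


engine = RuleEngine()
engine.register(Rule(
    name="flag_high_value",
    condition=lambda r: r.get("amount", 0) > 1000,
    action=lambda r: {**r, "high_value": True},
))
engine.register(Rule(
    name="normalize_country",
    condition=lambda r: "country" in r,
    action=lambda r: {**r, "country": r["country"].upper()},
))

result = engine.apply({"amount": 2500, "country": "in"})
# result == {"amount": 2500, "country": "IN", "high_value": True}
```

In a real pipeline, the same per-record `apply` function could be wrapped in a Spark UDF or a `DataFrame` map so the rules run distributed; here it is kept as plain Python for clarity.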
How to Apply for the IBM Off Campus Drive?
All interested and eligible candidates can apply through the following link before the application deadline.
Apply Link: Click Here