
Big Data Developer
1 day ago
Position Summary:
We are seeking a highly skilled and experienced Big Data Developer to join our engineering team. The ideal candidate will have strong expertise in building and maintaining robust, scalable, and high-performance big data solutions using modern big data tools and technologies. You will be responsible for developing data pipelines, transforming large volumes of data, and optimizing data infrastructure to enable real-time and batch processing for business insights and analytics.
Mandatory Skills:
- Design, develop, and maintain scalable, secure big data processing systems using tools such as Apache Spark, Hadoop, Hive, and Kafka.
- Build and optimize data pipelines and architectures for data ingestion, processing, and storage from diverse sources (structured, semi-structured, and unstructured).
- Develop ETL/ELT workflows and data transformation logic using custom code in Python, Scala, and Java, along with job scheduling tools such as Control-M.
- Translate business needs and requirements into technical specifications, and develop application strategy and high-level design documentation.
- Participate in solution design discussions, competitive displacements, proof-of-concept engagements, solution demonstrations, and technical workshops.
- Manage development sprints effectively and efficiently, from planning through execution to review, in an agile development environment.
- Convert business knowledge into technical knowledge and vice versa, and translate those insights into effective frontline action.
- Build and manage end-to-end big data streaming and batch pipelines per business requirements (see the illustrative sketch after this list).
- Report issues and blockers to management and the customer in a timely manner to mitigate risk.
- Collaborate with other teams, analysts, and stakeholders to understand data requirements and deliver efficient data solutions.
- Ensure data quality, consistency, and security by implementing appropriate data validation, logging, monitoring, and governance standards.
- Tune and monitor the performance of data systems, jobs and queries.
- Integrate and manage data platforms in the Azure cloud environment.
- Mentor junior developers and support team best practices in code review, version control, and testing.
- Document system designs, data flows, and technical specifications.
- Proactively identify and eliminate impediments and facilitate flow.
- Experience with project management software (e.g., JIRA, Manuscript) to support task estimates and execution.
- Participate in engineering development projects and facilitate sprint releases.
- Create reusable assets and supporting artifacts based on the industry landscape.
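As a rough illustration of the PySpark batch ETL work described above (not part of the role description), a minimal sketch follows. The storage paths, dataset, and column names (event_id, event_time) are hypothetical placeholders, and the abfss:// paths assume a cluster already configured for Azure Data Lake Storage Gen2 access.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

# Ingest semi-structured source data (hypothetical ADLS Gen2 path).
raw = spark.read.json("abfss://raw@example.dfs.core.windows.net/events/")

# Basic validation/cleansing: drop records missing a key, standardise the timestamp, de-duplicate.
cleaned = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .dropDuplicates(["event_id"])
)

# Write curated output partitioned by date for downstream analytics (hypothetical path).
(cleaned.withColumn("event_date", F.to_date("event_ts"))
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@example.dfs.core.windows.net/events/"))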
Mandatory Knowledge & Qualifications:
- Bachelor's or master's degree in computer science, data engineering, information technology, or related fields.
- 4+ years of experience in Big Data technologies.
- Experience with template-oriented data validation and cleansing frameworks.
- Strong programming skills in Python, Scala, shell scripting, and Java.
- Extensive experience in data transformation and building ETL pipelines using PySpark.
- Strong knowledge of performance tuning for existing big data streaming and batch pipelines.
- Proficiency in big data tools and frameworks such as Apache Spark, Hadoop, Kafka, and Hive.
- Strong proficiency in designing and developing generic ingestion and streaming frameworks that can be leveraged by new initiatives and projects (see the streaming sketch after this list).
- Experience with NoSQL databases such as HBase and Azure Cosmos DB.
- Experience optimizing high-volume data loads in Spark.
- Deep knowledge of the Azure cloud platform.
- Experience in identifying complex data lineage using Apache Atlas.
- Strong knowledge of creating, scheduling and monitoring jobs using Control-M.
- Strong knowledge of data analysis, Natural Language Processing (NLP), Machine Learning (ML) techniques, and data visualization.
- Deep understanding of the Alex data governance tool.
- Strong understanding of the Banking and Financial Services (BFS) domain is mandatory.
- Build predictive models and machine-learning algorithms.
- Present information using data visualization techniques.
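On the streaming side, the snippet below is an equally rough sketch (illustrative only, not part of the requirements): it reads a Kafka topic with Spark Structured Streaming and lands micro-batches as Parquet. The broker addresses, topic name, and checkpoint/output paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Subscribe to a Kafka topic (hypothetical brokers and topic).
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
         .option("subscribe", "transactions")
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Land micro-batches as Parquet with checkpointing for fault tolerance (hypothetical paths).
query = (
    parsed.writeStream.format("parquet")
          .option("path", "abfss://stream@example.dfs.core.windows.net/transactions/")
          .option("checkpointLocation", "abfss://stream@example.dfs.core.windows.net/_chk/transactions/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()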
Required Skills & Technology Stack:
Primary Skills:
- Hadoop – Expert
- Apache Spark – Expert
- Apache Atlas – Expert
- Python – Expert
- Scala – Expert
- Shell Script – Expert
- PySpark – Expert
- scikit-learn – Expert
- Matplotlib – Expert
- Azure Storage Explorer – Expert
Database Skills:
- HBase - Expert
- Hive - Expert
- Oracle – Expert
- SQL Server – Expert
- MySQL – Expert
- Cosmos DB - Expert
Other Skills:
- NLP – Expert
- ML – Expert
- Data Science – Expert
- Data Visualization – Expert
Tools:
- PyCharm/VSCode - Advanced
- Control-M – Advanced
- Code Versioning Tool - Advanced
- Project Management tools (JIRA, Confluence, Service NOW) – Advanced
- MS Visio - Expert
- HP ALM/QC- Expert
Salary Range: $75,000-$85,000
Date of Posting: 25/August/2025
Next Steps: If you feel this opportunity suits you, or Cognizant is the type of organization you would like to join, we want to have a conversation with you. Please apply directly with us.
For a complete list of open opportunities with Cognizant, visit our careers site. Cognizant is committed to providing Equal Employment Opportunities. Successful candidates will be required to undergo a background check.