Scala/Java Spark Data Engineer : Mountain View, CA (Day 1 onsite)
Contact: sivak@arkhyatech.com
Must have:
Strong understanding of Spark for implementing batch and streaming ETL pipelines.
In-depth knowledge of Scala and Java programming; good understanding of data structures and algorithms.
API development experience using Java Spring Boot.
Knowledge of microservice architecture and design patterns.
Good exposure to AWS cloud services.
Experience working with SQL and NoSQL.
Experience implementing scalable data pipelines on a parallel distributed framework like Spark, in Scala or Java.
Microservice application development and API development with Java Spring Boot.
Good to have: Hive, Kubernetes (Spark on Kubernetes), Docker
#contractor #hiring #onsite #mountainview #california #java #scala #spark #dataengineer
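For context on what the batch half of that first must-have asks for, here is a minimal, hypothetical sketch of a Spark batch ETL job in Java; the S3 bucket, paths, and column names are made-up placeholders, not anything from the posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.to_date;

public class BatchEtlJob {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("batch-etl-sketch")
        .getOrCreate();

    // Extract: read raw CSV events from a (hypothetical) S3 landing zone.
    Dataset<Row> raw = spark.read()
        .option("header", "true")
        .csv("s3a://example-bucket/landing/events/");

    // Transform: keep rows with a user id and derive a partition date
    // from a (hypothetical) event_ts column.
    Dataset<Row> cleaned = raw
        .filter(col("user_id").isNotNull())
        .withColumn("event_date", to_date(col("event_ts")));

    // Load: write partitioned Parquet back to the lake.
    cleaned.write()
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-bucket/curated/events/");

    spark.stop();
  }
}
```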
➡️ Role: Sr. Data Engineer with Scala & Java experience
➡️ Location: Mountain View, CA (Onsite)
➡️ Duration: 12+ month contract
➡️ Max rate: $60/hr on a C2C basis
➡️ Job Description
➡️ Must have:
🔹 Strong understanding of Spark for implementing batch and streaming ETL pipelines.
🔹 In-depth knowledge of Scala and Java programming; good understanding of data structures and algorithms.
🔹 API development experience using Java Spring Boot.
🔹 Knowledge of microservice architecture and design patterns.
🔹 Good exposure to AWS cloud services.
🔹 Experience working with SQL and NoSQL.
🔹 Experience implementing scalable data pipelines on a parallel distributed framework like Spark, in Scala or Java.
🔹 Microservice application development and API development with Java Spring Boot.
➡️ Good to have: Hive, Kubernetes (Spark on Kubernetes), Docker
If you're interested, feel free to message me directly on LinkedIn, reach out by email at emannaseer2003@gmail.com, or comment below, and I'll get back to you!
#SrDataEngineer #ScalaDeveloper #JavaDeveloper #SparkEngineer #ETLPipelines #BatchProcessing #StreamingETL #Microservices #JavaSpringBoot #APIdevelopment #AWSCloud #NoSQL #SQL #DataEngineering #CloudComputing #Kubernetes #Docker #SparkOnKubernetes #DataPipeline #TechJobs #HiringNow
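The streaming half of the same requirement might look like the following hypothetical sketch, using Spark Structured Streaming with a Kafka source; the broker address, topic, and output paths are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath. The checkpoint location is what lets Spark recover the stream and keep file output exactly-once.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StreamingEtlJob {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("streaming-etl-sketch")
        .getOrCreate();

    // Source: subscribe to a (hypothetical) Kafka topic of raw events.
    Dataset<Row> events = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp");

    // Sink: append Parquet files, with checkpointing for recovery.
    StreamingQuery query = events.writeStream()
        .format("parquet")
        .option("path", "s3a://example-bucket/streaming/events/")
        .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
        .start();

    query.awaitTermination();
  }
}
```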
I’m #hiring. Know anyone who might be interested? Apply here: ngowtham@cliecon.com
#clieconsolutions #bigdata #scala #python #machinelearning #hadoop #bigdataengineer #bigdatadeveloper #w2 #w2contract #jobposting #jobpost #innovative #innovation #linkedin #linkedincommunity #love #career #computerengineer #computerscience #computersciences
No #H1b or #OPT. Candidates must be local to #OH.
Role: Platform Engineer 1
Job Type: Contract
Location: Cincinnati, OH (local candidates only)
Contact: jeremy@provincesoft.com / 313-217-8234 ext. 110
Skills:
1) Big data technology (Kafka, Hadoop, Spark, etc.)
2) DBA experience building ETL and data warehouse systems
3) PHP, Ruby, Python, JavaScript, Elixir, Go, or comparable
4) Configuration management software (Puppet, Ansible) and/or orchestration software (Docker Swarm, Kubernetes)
Dear Network,
RSN GINFO SOLUTIONS is seeking:
Job Title: Python Developer - Neo4J Graph Database Integration
Location: Hartford, Connecticut
We are seeking an exceptional Python Developer to play a crucial role in developing an application that integrates a Neo4J graph database with Kafka topics.
Responsibilities:
- Develop a Python application to seamlessly integrate the Neo4J graph database with Kafka topics.
- Design and implement efficient data loading mechanisms to handle large volumes of transactions using change data capture (CDC) techniques.
- Collaborate with cross-functional teams, including data engineers, data scientists, and software developers, to ensure smooth integration and optimal performance.
- Write clear, maintainable, and well-documented code following best practices.
- Develop comprehensive unit tests to ensure the reliability and robustness of the application.
- Create detailed technical documentation to facilitate ease of understanding and future maintenance.
- Implement continuous integration and continuous delivery (CI/CD) pipelines for the Python application.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience (5+ years) working as a Python Developer, preferably in a data-intensive environment.
- Strong proficiency in the Python programming language and experience with Python frameworks (e.g., Flask, Django).
- In-depth understanding of the Neo4J graph database and experience with graph data modeling.
- Hands-on experience with Apache Kafka and knowledge of Kafka Connect for data integration.
- Familiarity with change data capture (CDC) techniques and real-time data processing.
- Solid understanding of the software development lifecycle (SDLC) and agile methodologies.
- Experience writing unit tests using testing frameworks such as pytest.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
- Prior experience implementing CI/CD pipelines using tools like Jenkins, GitLab CI, or similar.
Preferred Qualifications:
- Master's degree in Computer Science or a related field.
- Experience working with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- Understanding of streaming data processing frameworks such as Apache Flink or Apache Spark Streaming.
- Familiarity with data visualization tools such as D3.js or Plotly.
#PythonDeveloper #Neo4J #Kafka #DataIntegration #CDC #CICD #SoftwareEngineering #GraphDatabase #DataEngineering #HartfordJobs #ConnecticutTech #TechJobs #PythonJobs #SoftwareDevelopment #DataProcessing #JobOpening #HiringNow #DeveloperJobs #TechCareer #Programming #SoftwareTesting #UnitTests #Documentation #AgileMethodology #USjob
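The core of the application this post describes is a consume-and-write loop that moves CDC events from a Kafka topic into Neo4J. The role asks for Python, but purely as a rough illustration of the shape of that loop, here is a minimal sketch using the official Kafka and Neo4j Java clients; the topic name, connection details, credentials, and Cypher query are all hypothetical placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class CdcToNeo4j {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");
    props.put("group.id", "neo4j-loader");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
         Driver driver = GraphDatabase.driver("bolt://localhost:7687",
             AuthTokens.basic("neo4j", "password"));
         Session session = driver.session()) {
      consumer.subscribe(List.of("transactions-cdc"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
          // Upsert one node per CDC event; real code would parse the
          // JSON payload instead of storing it whole.
          session.run("MERGE (t:Transaction {id: $id}) SET t.payload = $payload",
              Values.parameters("id", record.key(), "payload", record.value()));
        }
        // Commit offsets only after the batch has been written to the graph.
        consumer.commitSync();
      }
    }
  }
}
```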
📌 New Requirement on C2C/W2 ✅
Job Title: AWS Architect with Spark & Scala
Location: 100% Remote
Skills: Very strong coding skills (Scala, Java, Python, Ruby, JSON, XML, etc.). Design end-to-end PDP data solutions. Experience architecting in an AWS environment.
📩 Email the resume to tarun.manukonda@srsconsultinginc.com
#aws #awsarchitect #dataengineer #redshift #dataarchitect #scalaarchitect #architect #scala #spark #java #python #ruby #json #xml #PDP #coding #programming #architectspark #architectscala
#Hiring_W2_Hybrid #Data_Engineer with #Java
Locations: Houston, TX (onsite); Jersey City, NJ; Columbus, OH
#W2 candidates only; #USC_GC_H4 only
Must have: #Java, #Spark, #AWS.
Java/Spring plus Spark/SQL/AWS; must be very good at building data pipelines. This is the primary skill set and is a MUST HAVE.
They will be working in a Java stack, but they MUST BE very strong on the Spark/data engineering side of things.
Data Lake/Snowflake/Kubernetes experience is a HUGE plus; they use these daily.
Email: ganesh@aptoninc.com
#Data_Engineer #Java #AWS #Spring #Spark #SQL #Houston #JerseyCity #aptonInc
Hello Linkies,
I'm hiring a Sr. Full Stack Backend Developer for the Bangalore location.
Experience: 5 - 7 years
Overview of Role:
* Competent in building enterprise-scale web services and handling data engineering tasks, including ETL pipelines, with some front-end development experience.
* Proficient in backend technologies such as Node.js (for web services), TypeScript, and JavaScript (ES6), with a preference for knowledge of Golang.
* Familiar with other programming languages, including Golang and Python (especially PySpark).
* Some practical experience in front-end web development using React, TypeScript, and JavaScript (ES6).
* Skilled in CI/CD methodologies using tools like Docker and Kubernetes.
* Experienced in deploying applications on AWS infrastructure, including services like Lambda, SQS, SNS, Glue, and Step Functions.
Database Skills:
* Proficient in at least one SQL database (MySQL, PostgreSQL, Oracle) and one NoSQL document store (DynamoDB, Elasticsearch, MongoDB, Couchbase).
Additional Technical Skills:
* Experience with NoSQL databases: document stores (DynamoDB, Elasticsearch, MongoDB, Couchbase) and columnar databases such as Cassandra.
* Knowledge of ANSI SQL, including stored procedures, triggers, and functions.
* Exposure to big data warehousing and data engineering tools like Redshift, AWS Lambda, and Glue.
Share an updated CV at dixitakothari.adt@gmail.com
Gauri Dhakad
#fullstack #backendfullstack #golang #python #nodejs #Redshift #Lambda #AWS #glue #datawarehousing #dataengineering #nosql #mysql #postgres #mongodb #react #docker #kubernetes #cicd #etl #etlpipelines #javascript #typescript #Couchbase #Cassandra #Elasticsearch
Hello Folks! We are #hiring. We have #directclientrequirements. If you are interested, drop your resume to prathyusha.n@aplombtek.com
Open positions:
- Data Engineer
- Golang Developer
- Python Developer
Note: Only W2, no C2C.
#DataEngineer #DataEngineering #BigData #DataPipeline #ETL #DataIntegration #DataScience #DataAnalytics #DataWarehousing #DataLake #CloudData #MachineLearning #AI #DataOps #Hadoop #Spark #Kafka #DatabaseManagement #DataArchitecture #DataManagement #SQL #NoSQL #AzureData #AWSData #Golang #GoLangDeveloper #GoProgramming #GoLangCommunity #GoDev #GoCode #GoLangTips #GoLangProjects #GoLangTutorials #GoLangLearning #BackendDevelopment #Microservices #CloudNative #GoLangFramework #Concurrency #GoLangWeb #GoLangMicroservices #GoLangTools #GoLangJobs #GoLangLibraries #Python #PythonDeveloper #PythonProgramming #PythonCode #PythonDev #PyCon #PythonCommunity #PythonScripts #PythonProjects #LearnPython #PythonCoding #PythonLearning #Pythonista #PythonTips #Django #Flask #DataScience #MachineLearning #PyBites
RECRUITER: This #dataengineering position requires someone good in #Java. In which areas do you use Java as a #dataengineer?
Me:
1. #DataPipelineDevelopment: Java is often used to build robust and scalable data pipelines for processing, transforming, and moving data between systems. You can leverage Java libraries and frameworks like #ApacheBeam or Spring Batch for this purpose (see the sketch below).
2. #BigDataProcessing: Java is widely used in the big data ecosystem, especially with frameworks like #ApacheHadoop and #ApacheSpark. You can write MapReduce jobs, Spark applications, or Hadoop workflows in Java to process large volumes of data efficiently.
3. #DataIntegration: Java can be used to integrate various data sources and systems. You might build connectors or adapters in Java to interact with databases, APIs, message queues, or other data sources, enabling seamless data flow within your organization's infrastructure.
4. #CustomDataAnalysis Tools: Java's flexibility allows you to develop custom data analysis tools or libraries tailored to your specific requirements. Whether it's implementing complex algorithms, statistical analysis, or machine learning models, Java provides a solid foundation for building such tools.
5. #RealtimeDataStreaming: Java is well suited for developing real-time data streaming applications. You can use frameworks like #ApacheKafka or #ApacheFlink along with Java to process and analyze streaming data in real time, enabling instant insights and actions based on incoming data streams.
#data #datascience #machinelearning #dataengineering
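As a minimal illustration of area 1, here is a hypothetical batch pipeline in the Apache Beam Java SDK that reads raw text files, drops empty lines, normalizes the rest, and writes the result; the file paths are placeholders, and a runner such as the DirectRunner is assumed to be on the classpath.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class CleanLinesPipeline {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();

    pipeline
        // Extract: read raw text files from a (hypothetical) input directory.
        .apply("ReadRaw", TextIO.read().from("data/raw/events-*.txt"))
        // Transform: drop empty lines, then trim and lowercase the rest.
        .apply("DropEmpty", Filter.by((String line) -> !line.trim().isEmpty()))
        .apply("Normalize", MapElements
            .into(TypeDescriptors.strings())
            .via((String line) -> line.trim().toLowerCase()))
        // Load: write the cleaned lines back out as sharded text files.
        .apply("WriteClean", TextIO.write().to("data/clean/events"));

    pipeline.run().waitUntilFinish();
  }
}
```

The same pipeline shape runs unchanged on different runners (DirectRunner locally, Flink or Dataflow in production), which is the portability argument usually made for Beam over runner-specific APIs.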
Job Position: Python Backend Developer
Experience: 8 years
Location: Remote
Contract: 6+ months
Notice Period: 0-30 days
Skills: Python, object-oriented design, algorithms, SQL, PostgreSQL, system architecture, design patterns, scalability, software engineering best practices, coding standards, code reviews, testing, source control, AWS, Azure, GCP, fintech, quantitative models, data-driven applications, Tableau, Power BI, Pandas, NumPy, Matplotlib.
Share your resume to: k.v.arunsathyan507@gmail.com
#Pythonprogramming #Pythondevelopment #Pythonlibraries #Datamanipulation #Pythonscripting #OOP #Classdesign #Inheritance #Polymorphism #Encapsulation #Abstraction #Designpatterns #Sorting #Searching #Graphalgorithms #Dynamicprogramming #Timecomplexity #Spacecomplexity #StructuredQueryLanguage #SQLqueries #Dataretrieval #Datamanipulation #Relationaldatabases #PostgreSQLdatabase #PostgreSQLqueries #SQLoptimization #Databaseadministration #PostgreSQLdesign #Systemdesign #Microservices #Distributedsystems #Architecturepatterns #Faulttolerance #Loadbalancing #Singleton #Factory #Observer #Strategy #Builder #Adapter #MVC #Creational #Structural #Behavioralpatterns #Horizontalscaling #Verticalscaling #Distributedsystems #Loadbalancing #Autoscaling #Elasticity #Agilemethodologies #Scrum #Continuousintegration #Codequality #Modulardesign #Documentation #Refactoring #Codeconventions #Styleguides #Namingconventions #Cleancode #Codeconsistency #Peerreview #Codequality #Collaboration #Refactoring #Bugdetection #Feedbackprocess #Unittesting #Integrationtesting #Testdrivendevelopment #TDD #Automatedtesting #Testcoverage #Mocking #Git #Versioncontrol #GitHub #GitLab #Branchingstrategies #Commithistory #Cloudcomputing #EC2 #S3 #Lambda #RDS #CloudFormation #AWSservices #MicrosoftAzure #Azureservices #Cloudinfrastructure #AzureFunctions #VirtualMachines #AzureDevOps #Financialtechnology #Paymentssystems #Blockchain #Cryptocurrency #Digitalbanking #Financialservices #Financialmodeling #Statisticalanalysis #Machinelearningmodels #Riskmodeling #Pricingmodels #Dataanalytics #Datascience #Dataengineering #Datapipelines #Realtimedataprocessing #Datavisualization #BItools #Tableaureports #Dashboards #TableauServer #TableauPrep #Datamanipulation #Dataframes #Dataanalysis #Pandaslibrary #Datacleaning #Numericalcomputation #Arrays #NumPyarrays #Linearalgebra #Numericaloperations #Datavisualization #Plotting #Charts #Graphs #Dataplotting #Matplotliblibrary