Description
IMPORTANT
Top 3 skills that must be seen on the resume:
- Scala – 3-4 years of experience
- Spark – 3-4 years of experience
- Cloudera or Hortonworks environment experience

Intermediate-to-senior role.
Any testing in interviews? If so, please provide details. Yes, the in-person interview will involve some coding questions to test the candidate's knowledge of Scala/Spark and of how to operate in a Hadoop environment. This would be done on a whiteboard or paper; exact syntax is not a must, but the candidate must demonstrate knowledge of the tools and skill in programming.

What types of projects will this candidate be working on?
- Building workflows to bring data into Hadoop from other databases, files, or streaming sources (see the sketch after this list)
- Building efficient queries in Hadoop (Impala and Hive)
- Understanding and supporting existing workflows to ensure proper operation
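For context on the interview and the first project above, here is a whiteboard-level sketch in Scala/Spark of that kind of ingestion task: landing a delimited extract into a Hive table and sanity-checking it with a query. The paths and table names are hypothetical, and as noted above, exact syntax is not what is being tested.

    import org.apache.spark.sql.SparkSession

    object CustomerIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("customer-ingest")
          .enableHiveSupport()   // requires a Hive metastore on the cluster
          .getOrCreate()

        // Land a delimited extract from HDFS into a managed Hive table.
        val customers = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///landing/customers/")   // hypothetical path

        customers.write
          .mode("overwrite")
          .saveAsTable("staging.customers")    // hypothetical database.table

        // Once registered in the metastore, the table is queryable from
        // Hive or Impala as well; a quick sanity check from Spark:
        spark.sql("SELECT COUNT(*) FROM staging.customers").show()

        spark.stop()
      }
    }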
Typical hours worked? 9am to 5pm (flexible), 37.5 hours per week.
Why has this position arisen (backfill)? The position is to backfill a contractor who left Bell. We have open projects that require completion.
Any potential to hire full time? Yes, there is always potential for an exceptional candidate to become full time (pending approvals).
Flex hours, possible to work remotely? Presence in the office is requested, with remote work possible on occasion if needed.

Description
Responsible for the development, design, and implementation of application systems. Designs and codes programs, including testing the code, finding errors, and correcting it to ensure quality. Interfaces with the technical team to design and implement application systems.

Responsibilities:
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
- Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis (a streaming-acquisition sketch follows at the end of this description)
- Ensure Big Data practices integrate into overall data architectures and data management principles (e.g. data governance, data security, metadata, data quality)
- Create formal written deliverables and other documentation, and ensure designs, code, and documentation are aligned with enterprise direction, principles, and standards
- Train and mentor teams in the use of the fundamental components of the Hadoop stack
- Assist in the development of comprehensive and strategic business cases used at management and executive levels for funding and scoping decisions on Big Data solutions
- Troubleshoot production issues within the Hadoop environment
- Performance-tune Hadoop processes and applications

Requirements:
- Proven experience as a Hadoop Developer/Analyst in the Business Intelligence and Data Management production support space
- Bachelor's degree in Computer Science, Management Information Systems, or Computer Information Systems
- Minimum of 2 years building and coding applications using Hadoop components: HDFS, Kafka, Flume, HBase, Hive, Sqoop
- Minimum of 2 years coding in Java, Scala/Spark, Python, Hadoop Streaming, and HiveQL
- Minimum of 4 years of experience with traditional ETL tools and Data Warehousing design
- Strong personal leadership and collaborative skills, combined with comprehensive, practical experience and knowledge in end-to-end delivery of Big Data solutions
- Experience in system administration, Exadata, and other RDBMS is a plus
- Proficiency in SQL/HiveQL
- Hands-on expertise in Linux/Unix and scripting skills
- Strong communication skills, technology awareness, and the ability to work with senior technology leaders
- Good knowledge of Agile methodology and the Scrum process
- Delivery of high-quality work, on time and with little supervision
- Critical thinking/analytic abilities

Work Address Details - Creekbank Road, Mississauga
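As a rough illustration of the streaming side of the data-acquisition responsibility listed above, here is a minimal Spark Structured Streaming sketch in Scala. The broker, topic, and paths are hypothetical, and the Kafka source assumes the spark-sql-kafka-0-10 connector is on the cluster's classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    object EventStreamToHdfs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("event-stream-to-hdfs")
          .getOrCreate()

        // Consume a Kafka topic as a streaming DataFrame.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  // hypothetical brokers
          .option("subscribe", "events")                      // hypothetical topic
          .load()
          .select(col("value").cast("string").as("payload"))

        // Continuously land the payloads as Parquet on HDFS, with a
        // checkpoint so the job can recover after a restart.
        val query = events.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/events/")             // hypothetical paths
          .option("checkpointLocation", "hdfs:///chk/events/")
          .start()

        query.awaitTermination()
      }
    }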