Data Engineer

2018.10.02 Taiwan

[About this Job] 
We are looking for a savvy Data Engineer with a technical background in data system and platform development. The hire will be responsible for expanding and optimizing our data and data pipeline infrastructure, as well as optimizing data flow and collection for cross-functional teams. If you want to join a world-class development team at LINE, we look forward to hearing from you soon!


[Responsibilities]
- Develop a next-generation data processing and analytics system for the Taiwan market and integrate it with the global platform.
- Work with Taiwan project teams to identify, design, and implement internal data process improvements, such as automating manual processes and optimizing data delivery.
- Build analytics tools that utilize the data pipeline to provide insights into customer needs, operational efficiency, and other key business performance metrics.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Work closely with our development teams in Tokyo and Seoul.
- As a member of the New Initiatives Task Force, you might also:
   - Create APIs and services that allow third parties to integrate with and utilize the LINE platform.
   - Design and build core backend software components for the Messaging, Social Graph, and Partnership platforms.


[Qualifications]

* Required
- At least 3 years of hands-on software development experience with Java, Scala, or Python.
- B.S. or M.S. in Computer Science or a related field.
- Experience building distributed services and handling big data (preferred).
- Fluent in English.
- Experience with big data tools or stream-processing systems: Hadoop, MapReduce, ZooKeeper, HDFS, HBase, Hive, Spark, Kafka, Storm, Spark Streaming, etc.
- Experience in custom ETL design, implementation, and maintenance on Hadoop clusters.
- Experience with multi-threaded programming and debugging.
- Proficiency in Linux and shell scripting.
- Good understanding of distributed systems and basic mathematics such as statistics and probability.

* Preferred 
- Experience building and optimizing "big data" data pipelines, architectures, and data sets.
- Analytic skills related to working with unstructured datasets.
- Technical capacity to understand and implement at-least-once / exactly-once delivery guarantees in distributed data pipelines.
- Strong SQL skills, especially in the area of data aggregation.
- Experience with troubleshooting/tuning JVM GC.
- Experience with Maven and Git.
- Experience with A/B testing environments.


[Location]

Neihu Dist., Taipei, Taiwan
