Beginning with its messaging app, LINE has grown to offer a wide range of services around the world, which now allows us to collect and use over 1,000 types of data to enhance our services through engineering, planning, and data science.
Our team, Hadoop Part, develops and operates numerous pieces of software for Hadoop and its ecosystem, which underlie the data analysis platform provided by LINE's Data Platform Department. Our main mission is to provide highly reliable systems for petabyte-scale clusters of over 1,000 servers and to increase the platform's usability. We are also committed to tasks unique to large-scale platforms, such as designing data centers and creating redundant architectures in collaboration with Infrastructure teams, as well as fixing bugs in and contributing to open-source projects in the Hadoop ecosystem. Given the significance and sensitivity of the data on our platform, we also work to regularly strengthen our security measures and policies.
<About Hadoop Part>
Site Reliability Engineering may sound like simple, non-creative operational work.
In reality, however, the issues we handle are so complex that the average Hadoop user would never encounter them, let alone find solutions on their own.
This is mainly because our big data analysis platform is one of the largest-scale platforms in the world, and our data sets, collected from numerous local and global services, are remarkably diverse. We take on these challenges with a flexible mindset and a strong drive for continuous improvement. As part of our team's initiatives, we are also automating tasks across multiple layers to secure sufficient time for development without sacrificing the quality of the user support we offer.
・Stabilize and improve operation of large-scale Hadoop clusters
・Develop robust security systems for highly confidential data
・Support users around the globe
<Current Product Phase/Exciting Challenges/Opportunities>
LINE has used Hadoop at a large scale since its early days. As a result, we have accumulated deep knowledge of the platform, but we have also faced system obsolescence over time. To address these challenges, our team has made meaningful contributions by improving, newly developing, and migrating systems to meet the latest needs and service scales.
Hadoop ecosystem - HDFS, YARN, Hive, Presto, Spark, HBase
Security tools - Kerberos, LDAP, Ranger, Atlas
Operating/monitoring tools - Ansible, Grafana, Prometheus + Promgen, imon (internal monitoring tools)
Development environments - IntelliJ, GitHub, Jenkins
Development languages - Java, Python, Golang
- Strong interest in the Hadoop ecosystem
- Ability to identify the true needs of various users
- Mindset geared toward proactively solving issues
- Bachelor's, master's, or doctoral degree in Computer Science or Informatics, or equivalent experience
- Experience developing and operating a large-scale distributed cluster of over 1,000 servers
- Experience developing and operating the Hadoop ecosystem (HDFS, YARN, Hive, Presto, Spark, HBase)
- Experience operating large-scale data in a cloud environment, such as AWS and GCP
- Experience operating large-scale servers using Infrastructure as Code tools
- Understanding of, and experience tuning, the internal architecture of JVMs and garbage collectors (GCs)
- Experience with development and system operation in Linux and/or Unix environment(s)
- Knowledge of, and experience with monitoring tools, such as Elastic Stack, Prometheus, and Grafana
- Proactively identifies issues and offers meaningful improvement plans, given user needs and business priorities
- Flexible in meeting diverse needs of various markets
Location: Tokyo, JAPAN
Shinjuku Office / JR SHINJUKU MIRAINA TOWER 23rd FL., 4-1-6 Shinjuku, Shinjuku-ku, Tokyo 160-0022
One of the following will apply: discretionary labor system for professional work (the employee is deemed to have worked 9.5 hours a day, regardless of the actual number of hours worked); flex-time system (core time: 11:00 am–4:00 pm); or fixed working hours of 10:00 am–6:30 pm (actual working hours: 7 hours 30 minutes)
*To be determined after the interview process
Weekends (Saturdays and Sundays), national holidays, paid leave, New Year's holiday, congratulatory and condolence leave, and "Refreshment" leave (every 5 years of continuous employment, employees are entitled to 10 days of paid leave)
Annual salary system (to be determined based on skills, experience, and abilities after discussions)
- Annual compensation will be divided into 12 months and paid on a monthly basis.
- Separate incentives available (*1)
- Compensation revision: twice a year
- Allowances: commuting allowance, LINE Pay Card Benefit Plan (*2)
(*1) In addition to your annual compensation, you may receive incentives (twice a year) depending on the company's performance and the evaluation of your individual performance. (Incentives are not guaranteed. An incentive payment will be made only if you remain employed as of the payment date.)
(*2) This is an allowance separate from salary, intended for employees to use for their health, personal development, support for raising the next generation, and more.
Employment insurance, workers' accident compensation insurance, health insurance, and employees' pension insurance
- Periodic health checkup
- Company events and others
Details to be shared during interviews.