Every day, hundreds of millions of messages are sent over LINE. A combination of thousands of servers and distributed storage middleware (Redis Cluster, HBase, Kafka, etc.) makes this possible—working together to process millions of queries per second and handle several petabytes of data. But a service the size of LINE inevitably runs into problems: bugs no one in the world has seen before, or performance issues that arise only under unique circumstances.
Our team is looking for a distributed storage reliability engineer who can enhance the reliability, availability, and maintainability of our distributed storage layers—the backbone of the LINE app—by quickly finding, investigating, and solving problems like these.
- Develop a strong understanding of our distributed storage middleware and the JVM's characteristics, and perform the tuning and code changes needed to improve performance
- Use Prometheus, Elastic Stack, Grafana, etc., to build monitoring environments for HBase, Redis Cluster, and Kafka, and make ongoing improvements to them
- Use tools such as Ansible and Chef to automate and streamline the operation of LINE's distributed storage middleware
- Develop systems that maintain data efficiently and enable recovery during large-scale disasters or other emergencies
- Degree in Computer Science or a related field, or equivalent work experience
- Experience developing and operating systems in Linux/Unix environments
- Proficiency in developing with languages that run on the JVM, such as Java, Scala, Kotlin, or Clojure
- Strong interest in distributed storage middleware such as HBase, Redis Cluster, and Kafka
- Ability to independently pinpoint and solve issues
- Master's or PhD in Computer Science or a related field
- Experience using HBase, Kafka, Redis, etc., in developing and operating distributed systems for large volumes of data and traffic
- Knowledge of and experience in monitoring tools such as Elastic Stack, Prometheus, and Grafana
- Experience using Ansible, Chef, or other provisioning tools to operate large server groups
- Experience developing concurrent and multi-threaded systems
- Experience developing database systems such as RDBMSs and key-value stores (KVS)
- Understanding of JVM internals and experience tuning garbage collection
- Experience developing and operating large-scale consumer services
- Japanese (working proficiency) and English (reading and writing)
Location: Tokyo, JAPAN
Shinjuku Office / JR SHINJUKU MIRAINA TOWER 23rd FL., 4-1-6 Shinjuku, Shinjuku-ku, Tokyo 160-0022
One of the following will be applied: discretionary labor system for professional work (employees are deemed to have worked 9.5 hours a day, regardless of the actual number of hours worked), flex-time system (core time: 11:00 am–4:00 pm), or fixed hours of 10:00 am–6:30 pm (actual working hours: 7 hr 30 min)
*To be determined after the interview process
Weekends (Saturdays and Sundays), national holidays, paid leave, New Year’s holiday, congratulatory and condolence leave, "Refreshment" leave (every 5 years, employees who have been employed under a continuous contract are entitled to 10 days of paid leave)
Annual salary system (to be determined based on skills, experience, and abilities after discussions)
- Annual compensation will be divided into 12 months and paid on a monthly basis.
- Separate incentives are available (*1)
- Compensation revision: twice a year
- Allowances: commuting allowance, LINE Pay Card Benefit Plan (*2)
(*1) In addition to your annual compensation, you may receive incentives (twice a year) depending on company performance and your individual performance evaluation. (Incentives are not guaranteed. An incentive will only be paid if you remain employed as of the payment date.)
(*2) This is an allowance, separate from salary, that employees can use for their health, personal development, support for raising the next generation, and more.
Employment insurance, workers' accident compensation insurance, health insurance, employees' pension insurance
- Periodic health checkup
- Company events and more
Details to be shared during interviews.