
Lotte Hotel Co., Ltd. Lotte Duty Free


Key Takeaway

Lotte Duty Free Online Infrastructure Operated Stably for 7 Consecutive Years

By operating its AWS-based online infrastructure stably for 7 consecutive years, Lotte Duty Free strengthened the competitiveness of its online duty-free business through the application of generative AI-based services and continuous cost optimization in collaboration with FinOps.


Client: Lotte Hotel Co., Ltd. Lotte Duty Free

Industry: Retail

1. Overview (Project Background)

 

ViewRun Technology launched VueX, a new SaaS-type LiDAR AI platform, in the global market this year and built a cloud-based architecture in-house to operate it stably and at scale.

Since VueX requires high-performance GPU workloads such as large-scale LiDAR data processing, auto-labeling, and model training, the team adopted a Provisioning Ensemble structure (a combination of Terraform, AWS CDK, and Karpenter) and implemented it on AWS to meet these requirements.

 

After the launch, MegazoneCloud reviewed the structural stability of VueX, verifying that its architecture conforms to AWS best practices and checking its consistency.

 

After VueX's launch and listing on AWS Marketplace, full-scale global market expansion became necessary; in particular, it was important to secure product promotion channels in the Middle East, where interest in the mobility and deep-tech industries is high.

In this process, the "2025 AX (Artificial Intelligence Transformation) Solution SME Overseas Expansion Support" Middle East program led by MegazoneCloud gave ViewRun the opportunity to participate in the ADIPEC 2025 joint pavilion and introduce and promote VueX to the local market.

 


 

2. Challenge (Problem Definition)

 

1. Ensuring Standardization and Consistency Between Infrastructure and Service Layers

  • Because the base infrastructure (VPC, EKS, RDS, etc.) and the application layers (APIs, data pipelines) change and expand at different speeds, a single IaC tool alone cannot satisfy both stability and agility.

  • Maintaining deployment reproducibility and configuration consistency across multiple environments (dev/stage/prod) is challenging.

 

2. Unpredictable Resource Demand Based on LiDAR and AI Workload Characteristics

  • Mixed GPU/CPU workloads such as large-scale LiDAR uploads, auto-labeling, and model training spike suddenly, making them difficult to handle with fixed-node configurations alone.

  • When operating multi-tenant SaaS, usage gaps between customers increase, requiring real-time scaling and cost control.

 

3. Need to Meet Scalability, Cost, and Stability Standards for Global SaaS Operations

  • GPU nodes are expensive, so inefficient scaling risks runaway operating costs.

  • Because the platform targets global OEM and Tier-1 customers, it must meet high operational standards for uptime, security, and scalability; infrastructure instability directly erodes customer trust.

 


 

3. Solution (Resolution)

 

For the stable delivery of VueX, ViewRun Technology and MegazoneCloud configured a Provisioning Ensemble structure on AWS and built a SaaS operating system that satisfies both stability and scalability.

 
 

Component

Key Implementation Details

AWS Infra

We configured VueX's base infrastructure (VPC, subnets, EKS, RDS) on AWS and designed it to meet the stability and security standards required for global SaaS.
We standardized the infrastructure layer for consistency, enabling multi-environment (dev/stage/prod) operations.

Terraform

We codified immutable infrastructure (accounts, networks, clusters, etc.) with Terraform to establish a highly reproducible deployment system.
By automating environment-specific configurations, we reduced the risk of infrastructure changes and improved operational efficiency.
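The per-environment Terraform setup described above can be sketched roughly as follows. This is an illustrative config fragment only; the module names, variables, and CIDR ranges are assumptions, not the actual VueX configuration.

```hcl
variable "environment" {
  description = "Deployment environment (dev, stage, or prod)"
  type        = string
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

# Immutable base infrastructure: one module set, parameterized per environment
module "network" {
  source = "./modules/network"

  name = "vuex-${var.environment}"
  cidr = var.vpc_cidr
}

module "eks" {
  source = "./modules/eks"

  cluster_name = "vuex-${var.environment}"
  subnet_ids   = module.network.private_subnet_ids
}
```

Selecting an environment via a workspace (`terraform workspace select prod`) or a per-environment `.tfvars` file keeps dev/stage/prod reproducible from the same code, which is what makes the deployment system repeatable.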

AWS CDK

We structured VueX's service functions (API Gateway, Lambda, S3, data pipelines, etc.) with CDK, implementing an application layer that lets development teams quickly add and deploy features. Code-based management significantly improved feature release speed.

Karpenter

For workloads requiring high-performance GPUs, such as auto-labeling and model training, we implemented real-time automatic scaling with Karpenter.
By automatically selecting optimal instances based on pod requirements, we reduced GPU costs by approximately 30-45% and maintained stable processing performance even during traffic spikes.
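The core idea behind Karpenter's provisioning decision can be illustrated with a toy sketch: given a pod's resource requests, pick the cheapest instance type that satisfies them. The instance specs and hourly prices below are illustrative placeholders, not real AWS list prices, and the logic is a simplification of what Karpenter actually does.

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gib: int
    gpus: int
    hourly_usd: float  # illustrative price, not an AWS list price

# Hypothetical candidate catalog
CATALOG = [
    InstanceType("m5.2xlarge",    8,  32, 0, 0.38),
    InstanceType("g4dn.xlarge",   4,  16, 1, 0.53),
    InstanceType("g4dn.12xlarge", 48, 192, 4, 3.91),
    InstanceType("p3.2xlarge",    8,  61, 1, 3.06),
]

def pick_instance(vcpus, memory_gib, gpus):
    """Return the cheapest instance type satisfying the pod's requests."""
    candidates = [
        it for it in CATALOG
        if it.vcpus >= vcpus and it.memory_gib >= memory_gib and it.gpus >= gpus
    ]
    if not candidates:
        raise ValueError("no instance type satisfies the request")
    return min(candidates, key=lambda it: it.hourly_usd)

# A single-GPU auto-labeling pod gets the cheap g4dn rather than a p3
print(pick_instance(vcpus=4, memory_gib=16, gpus=1).name)  # g4dn.xlarge
```

Selecting the smallest, cheapest instance that still fits the pod (instead of a fixed large GPU node) is what drives the kind of cost reduction described above.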

Monitoring & Cost Optimization

We built a dashboard for real-time monitoring of key metrics across EKS, GPU nodes, auto-labeling, and model training, and applied cost optimization policies based on resource usage to improve operational transparency and efficiency.
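A resource-usage-based cost policy of this kind can be sketched as follows: flag GPU nodes whose average utilization over a monitoring window falls below a threshold, making them candidates for scale-in. The node names, metric values, and the 30% threshold are all made-up assumptions for illustration; in practice the samples would come from the monitoring dashboard.

```python
from statistics import mean

UNDERUSE_THRESHOLD = 0.30  # assumed policy: <30% average GPU utilization

def scale_in_candidates(samples_by_node, threshold=UNDERUSE_THRESHOLD):
    """Return node names whose mean GPU utilization is below the threshold."""
    return sorted(
        node for node, samples in samples_by_node.items()
        if mean(samples) < threshold
    )

samples = {
    "gpu-node-a": [0.82, 0.91, 0.75],  # busy training node, kept
    "gpu-node-b": [0.05, 0.12, 0.08],  # mostly idle -> scale-in candidate
}
print(scale_in_candidates(samples))  # ['gpu-node-b']
```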

 

 


 

4. Result (Achievements)

 

  • GPU-based auto-labeling and model training performance was optimized, significantly improving processing speed for the same tasks.

  • With Karpenter-based real-time scaling, GPU costs have been reduced by approximately 30-45%, and operational efficiency has been greatly improved.

  • By establishing a standardized deployment system based on Terraform and CDK, deployment stability and reproducibility across environments (dev/stage/prod) have been greatly improved.

  • By securing AWS Marketplace SaaS architecture consistency, trust with global OEM and Tier-1 customers has been strengthened, and onboarding of overseas customers has become smoother.
