My ASP.NET site runs on a farm of Windows EC2 web servers. Due to a recent traffic surge, I switched to spot instances to control costs. Spot instances are created from an AMI when the hourly rate drops below a set price. The web servers don't store any data, so creating and terminating them on the fly is not an issue, and so far the website has been running fine.
The problem is deploying updates. The application is updated every few days.
Before the switch to the spot fleet, updates were deployed as follows: (1) the CI server builds the site and deploys it to a staging server; (2) a staggered deployment rolls it out to the web farm using a simple XCOPY of files over mapped drives.
After switching to spot instances, the process is: (1) {no change}; (2) deploy the update to one of the spot instances; (3) create a new AMI from that deployment; (4) request a new spot fleet using the new AMI; (5) terminate the old spot fleet. (The AMI used by a spot request cannot be changed after the fact.)
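For reference, steps (3)-(5) map roughly onto the following boto3 calls. This is a minimal sketch only; the instance ID, AMI name, role ARN, and fleet configuration are all placeholders, not values from my actual setup:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# (3) Create a new AMI from the instance that received the update.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # instance holding the new build
    Name="webfarm-deploy-ami",          # hypothetical AMI name
)

# Wait until the AMI is actually usable before requesting a fleet.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# (4) Request a new spot fleet that launches from the new AMI.
fleet = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/fleet-role",
        "SpotPrice": "0.10",            # the "set rate" ceiling
        "TargetCapacity": 4,
        "LaunchSpecifications": [{
            "ImageId": image["ImageId"],
            "InstanceType": "m3.medium",
        }],
    },
)

# (5) Cancel the old fleet and terminate its instances.
ec2.cancel_spot_fleet_requests(
    SpotFleetRequestIds=["sfr-old-fleet-id"],
    TerminateInstances=True,
)
```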
Is there a way to simplify this process by enabling the nodes to either self-configure or use a shared drive (as Microsoft Azure does)? The site runs Umbraco CMS, which supports multiple instances serving from a single physical location, but I ran into security errors when trying to run a .NET application from a network share.
Bonus question: how can I auto-add new spot instances to the load balancer? Presumably, if there were a script that fetched the latest version of the application, it could add the instance to the load balancer once it's done.
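For the bonus question, here is a minimal sketch of that self-registration step, assuming a Classic Load Balancer whose name ("web-elb") is a placeholder. A boot script could run this after the application has been fetched and installed:

```python
import urllib.request
import boto3

# The instance discovers its own ID via the EC2 metadata service.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

# Register this instance with the load balancer (Classic ELB API).
elb = boto3.client("elb", region_name="us-east-1")
elb.register_instances_with_load_balancer(
    LoadBalancerName="web-elb",         # hypothetical ELB name
    Instances=[{"InstanceId": instance_id}],
)
```

Note that if the instances are launched by an Auto Scaling group attached to the load balancer, registration happens automatically, which is what the answer below relies on.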
I have a similar setup (except I don't use spot instances and my machines run Linux); here is the general idea:
- The CI creates latest.package.zip and uploads it to a designated S3 bucket.
- The CI sequentially triggers an update script on the current live instances, which downloads the latest package from S3 and installs it, then restarts the service.
- New instances are launched in an autoscaling group attached to the load balancer, with an IAM role that allows access to the S3 bucket and a user data script that triggers the update script on initial boot (see the sketch after this list).
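A minimal sketch of such an update script, assuming the bucket name, site path, and restart command are placeholders. The same script can be invoked by the CI on live instances and by the user data script on first boot:

```python
import subprocess
import zipfile
import boto3

BUCKET = "my-deploy-bucket"     # hypothetical bucket name
PACKAGE = "latest.package.zip"
SITE_DIR = r"C:\inetpub\site"   # hypothetical deployment path

# Download the package the CI uploaded; the instance's IAM role
# grants read access to the bucket, so no credentials are embedded.
s3 = boto3.client("s3")
s3.download_file(BUCKET, PACKAGE, PACKAGE)

# Unpack the new build over the site directory.
with zipfile.ZipFile(PACKAGE) as zf:
    zf.extractall(SITE_DIR)

# Restart the web server; on Windows/IIS this might be iisreset,
# on Linux it would be the relevant service manager command.
subprocess.run(["iisreset"], check=True)
```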
This should be doable with Windows spot instances too, I think.