Share your experience and expertise with iwstack.com
We are leaning towards iwstack.com for our cloud service. I experimented once with 30 euros; the user interface is completely messy, but the service itself was OK.
Can you post your experience with iwStack compared to other popular cloud services like AWS or Rackspace Cloud?
Our plan is to use 1 virtual router => 2-3 instances of an app server (1 GB each) and 2 instances of a 2 GB database server.
Does iwStack provide an option for a shared disk/drive to store the documents so that they can be served to clients from multiple app servers (VPSes)?
Comments
Hello!
You can set up your own storage server behind the virtual router, the same way you can put a database server on another instance, etc. Then you connect to it over the internal network and do not expose any port to the internet.
Alternatively, you can set up load balancing, for example with a few web servers and/or reverse proxies, using the feature in the virtual router. You can really have a LAN behind the virtual router and let those VMs communicate among themselves on the internal, shielded network.
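As a rough illustration of what such load balancing does, here is a minimal Python sketch. The backend addresses are hypothetical, and plain round-robin is assumed; the actual algorithm the virtual router uses is not stated in this thread:

```python
from itertools import cycle

# Hypothetical internal LAN addresses of web servers behind the virtual router.
backends = ["10.1.1.11", "10.1.1.12", "10.1.1.13"]

rotation = cycle(backends)

def next_backend():
    """Return the next web server in round-robin order."""
    return next(rotation)

# Incoming requests get spread evenly across the pool.
assigned = [next_backend() for _ in range(6)]
print(assigned)
```

The point is simply that the clients only ever see the router's public IP, while requests are fanned out across instances on the shielded internal network.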
Apart from the interface, everything has been mostly superb for me. I've been using iwStack for four months now. The only issue with the service appears to be on DDoS-protected IPs: I've had a period of heavy packet loss (many hours) and a 40-minute downtime as a result of issues with SeFlow.
There doesn't appear to be any SLA, as I didn't receive any credit or otherwise for the two periods.
Other than that, the service is good. Staff are friendly and helpful.
Maybe our expert @ErawanArifNugroho, who has a lot of experience with them, can help you; @Maounique seems to have already explained some of it.
I was just about to post about this, thanks @iSky
One of my instances at iwStack has been running since the launch of iwStack. It runs with 512 MB and with HA, so it never goes down.
There was a time when the node needed to be upgraded and rebooted, but the instances were not affected, unlike some other cloud providers, which still cause downtime when rebooting/applying patches.
We can have our own LAN with private networking, and have each user connect to it via VPN.
We can also make a backup of the disk to run another VM with the same configuration, or add another disk and allocate it to a different user.
Maounique has explained it better than me.
I have been with them since iwStack's beginning, and I am very happy:
Very good prices and good performance (they are not the cheapest, but they are the cheapest with good quality)
Very good service (I have opened several tickets and they have always been solved). They are also very friendly.
And I also love being able to upload my own ISOs, download my templates when I do not need them, and make backups.
They are really one of the best hosts out there.
@vyala, I highly recommend iwStack.
Since I perform my own external load balancing and failover, I don't use the virtual routing, etc. But out of 30+ providers, Prometeus/iwStack is one of a very short list that I trust enough so far to have multiple instances with.
The uptime/stability is unmatched and the disk I/O is nice. It is super convenient to be able to create/destroy on an as-needed basis (for high-demand situations or for an ad hoc project, for example) without any "help" needed from a support ticket or staff. The UI may not be the best, but I have no issues since I don't create/destroy often and the service just runs; I just set it and forget it.
Good luck
Hello!
Thank you very much for the kind words
The UI is clunky, agreed, but do not forget about the API. You can automate it in no time:
http://www.iwstack.com/tutorials/api-with-cloudmonkey/
The API also offers more power than the UI.
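Since cloudmonkey is the Apache CloudStack CLI, the underlying API presumably follows CloudStack's request-signing scheme (sort the parameters, lowercase the query string, HMAC-SHA1 with your secret key). A minimal sketch; the API/secret keys and command are placeholders, and the endpoint URL from your panel still needs to be prepended:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(params, api_key, secret_key):
    """Build a signed query string per the CloudStack API signing scheme."""
    params = dict(params, apiKey=api_key, response="json")
    # Sort parameters by key and URL-encode the values.
    query = "&".join(
        f"{k}={quote(str(v), safe='*')}" for k, v in sorted(params.items())
    )
    # Sign the lowercased query string with HMAC-SHA1, then base64-encode.
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="*")
    return f"{query}&signature={signature}"

# Hypothetical keys; use the ones from your own account.
url = sign_request({"command": "listVirtualMachines"}, "APIKEY", "SECRET")
print(url)
```

In practice cloudmonkey does all of this for you; the sketch just shows there is no magic in it if you want to script against the API directly.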
@Maounique, I am not so bothered by the iwStack UI (once I figured it out). The reality is I use it so infrequently.
What's key for me is that the service is stable - practically LEB prices, with "High End Box" service. For me stability trumps fancy features and Prometeus/iwStack's middle name is Stability.
This was a non-paid endorsement ;-)
I like iwStack, it's a nice service that's different from pretty much anything out there, and it's very reasonably priced for what you get.
My only issue (and the reason I stopped using it and have credit sitting around) is that the VMs sometimes take too long to spin up, which caused me problems a few times. Probably not an issue for most, but I bought credits because I need VMs for 10-15 minutes at a time, and I need them straight away; waiting 30 minutes or longer isn't an option, so since I can't rely on the timings, I had to go elsewhere.
Can't fault the service once the VMs are up, or the functionality of the panel (although the design could be improved and made clearer, IMO).
Ordered a service to act as a secured SSH endpoint. So far 100% uptime as far as I can tell (not monitored, never seen it offline).
Cheap, easy, reliable.
The panel UI is a bit clunky and difficult and could do with some work. But that's being picky.
For something costing €1.8/month I have no complaints.
€ 2.16
Basic 384 MB + 10 GB for the OS
I'll give it a try next month.
Ditto.
Service is OK, but the UI is godawful and I still can't exactly pin down how to make sure virtio is used.
Great to know about everyone's experience; I would go with iwStack.
@Maounique,
how long is the ping to Rackspace UK? We want to keep the files in Rackspace Cloud Files (images, PDFs, JSON and such).
Or does iwstack.com offer secure file storage for clusters?
Thanks all. I got back late from work; all the comments are really helpful.
@vyala:
root@deb7-32Bit:~# ping 5.79.57.1
PING 5.79.57.1 (5.79.57.1) 56(84) bytes of data.
64 bytes from 5.79.57.1: icmp_req=1 ttl=240 time=24.6 ms
64 bytes from 5.79.57.1: icmp_req=2 ttl=240 time=24.7 ms
64 bytes from 5.79.57.1: icmp_req=3 ttl=240 time=24.6 ms
64 bytes from 5.79.57.1: icmp_req=4 ttl=240 time=25.7 ms
^C
--- 5.79.57.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 24.665/24.951/25.711/0.439 ms
I like the interface, too! I think, compared to the low-priced clouds here (e.g. DO etc.), it has many more choices, and if you learn it, it can really be useful. About the service now: I use it mostly not for web hosting but for streaming services (video or Icecast), some of them permanently and some others occasionally. It is very stable, with good connectivity to Greece, where I live, and very, very cheap compared with what Uncle and Mao are offering! Kudos!
The characteristics are inherited.
Virtio is used by default in most OSes, so when you link an ISO it will probably use virtio if you specify the correct OS type. If unsure, just select Other PV 64-bit; that is virtio for sure, and this works for Windows too:
http://board.prometeus.net/viewtopic.php?f=8&t=1372&sid=75c5700e503508892f783c09d8eb1397
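One way to confirm from inside a Linux guest: virtio disks show up as /dev/vd*, while SCSI/SATA (or emulated IDE via libata) disks show up as /dev/sd*. A small sketch using that standard kernel naming convention:

```python
import os

def disk_bus(device_name):
    """Classify a Linux block device name by the bus its driver implies."""
    if device_name.startswith("xvd"):
        return "xen"
    if device_name.startswith("vd"):
        return "virtio"
    if device_name.startswith("sd"):
        return "scsi/sata"
    return "unknown"

# On a running guest, list the real block devices:
if os.path.isdir("/sys/block"):
    for dev in sorted(os.listdir("/sys/block")):
        print(dev, "->", disk_bus(dev))
```

`lspci` showing "Virtio block device" entries is another quick check.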
Also, if you wish to download the disk later, please make sure you mark the ISO link as extractable; that flag is inherited too. All VMs derived from it will inherit it, including snapshots, templates and the VMs made from those templates.
Please note the 384 and 512 MB instances are limited to 100 Mbps and 300 Mbps respectively, so they cannot be spun up as a quick DDoS or spam box. We monitor all flows automatically and get alerts about outgoing and incoming attacks, high SMTP activity, etc., so we can check things out.
You can make a custom disk as small as 1 GB if you only need SSH. It is well under 2 EUR then.
€ 1.512
Practically just the cost of the IP. If you shut it down, you really only pay 1.07 EUR a month, so the initial 30 EUR will last you about two and a half years.
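For the record, the arithmetic (using the 1.07 EUR/month figure from the post above) comes out slightly under that, at roughly two years and four months:

```python
credit_eur = 30.0
monthly_cost_eur = 1.07  # powered-down instance: effectively just the IP

months = credit_eur / monthly_cost_eur
print(f"{months:.1f} months = {months / 12:.1f} years")  # 28.0 months = 2.3 years
```

Close enough to "two and a half years" for forum purposes.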
Really great service, I am with them since the launch and very happy with it.
Fantastic service that only gets better. I like the new option to add multiple nics and have instances connected to multiple networks. Also the low-cost storage is great. Both were very useful for me.
Unfortunately, the upgrade, while it solved some bugs and allowed for new features, introduced other bugs, so we are now running cron garbage collectors to kick off instances that fail to start, which happens under some particular conditions (adding more than one type of storage at creation, for example, being the main cause). Billing is also still missing things. However, it is a living product; it will grow and have issues, and we try to stay on top of that. Actually, Uncle does, mostly.
Out of curiosity, is the HDD space scalable?
Do I have to create a whole new instance with a snapshot or can I just adjust the HDD size and reboot?
What parts of iwstack is oversold?
How do you determine when to scale the app servers in the face of such penalizing terms regarding single instance usage?
Never really thought such hosts (99% of LEB) were fit for horizontal scaling (which is fine, because 99% of customers don't require horizontal scaling). When planning horizontal scaling you generally want each node to be at or above a certain capacity (depending on how you distribute requests); or, more practically, you want to define an acceptable lower bound on some metric, say 99.999% of requests processed within 0.1 s on GET against /api/action, before deploying additional nodes, and reaching that point will generally stress the instance.
However, with typical restriction clauses like the one below, this goes completely against common practice for scaling on AWS EC2, Azure, Linode or Rackspace Cloud.
Prometeus in its sole discretion may discontinue service without any notice to any Hosting, VPS or iwStack customer that
uses a high amount of server resources (such as, but not limited to, CPU Time, Memory Usage, and Network Resources)
Even memory usage is noted as a restriction.
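Setting the ToS question aside, the scale-out trigger described a few posts up (deploy another node once a tail-latency bound is breached) can be sketched like this. The 0.1 s and 99.999% figures come from the post; the request sample is made up:

```python
def should_scale_out(latencies_s, bound_s=0.1, target_fraction=0.99999):
    """Return True when too few requests meet the latency bound, i.e. when
    the fraction of requests completing within bound_s seconds drops below
    target_fraction."""
    if not latencies_s:
        return False
    within = sum(1 for t in latencies_s if t <= bound_s)
    return within / len(latencies_s) < target_fraction

# Made-up sample of GET /api/action timings: one slow request out of ten
# already breaches a 99.999% target.
sample = [0.02, 0.03, 0.05, 0.04, 0.25, 0.03, 0.02, 0.06, 0.04, 0.03]
print(should_scale_out(sample))  # -> True
```

The tension the poster describes is that deliberately running nodes near this threshold is exactly the sustained load a "high resource usage" clause could penalize.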
Is iwstack hardware all located in Italy? Seem to recall prometeus moving into the US - Dallas? Is there an IP I can ping/traceroute (Italy and/or US)?
Forgive my ignorance - I hadn't really looked at this until now
@Maounique, I installed Windows 2008 with the Windows 2008 OS type. I have just tested it with PV 64-bit and it seems the hard disk is so much faster!!!
Thanks for the info
You can always add disks (up to 6 per instance). You do not need to power down to add one, but you do need to power down to grow data disks.
If you recreate an instance from a snapshot/template, the root disk will have the same size as the original, so that will not work. I suppose you could download the image, grow it, then put it back, but that is way too much work. The key is to keep a minimal root disk and put all data on, well..., data disks, which you can attach and detach at will.
Theoretically, the CPU and traffic are oversold. This means we bet that not everyone will use full CPU at the same time (and even then, the system will share the load fairly, so a minimal capacity is maintained); in practice our CPU-oversold servers (5-6 times more vCPUs than actual threads) stay at 10-30%.
At present only one node is hovering above 50%, which is where we move abusers so even they have full resources; all the others are under 20%.
Traffic is oversold too, but we are using some 10-20% of the capacity, and we have non-committed lines to absorb the peaks of attacks. Only once or twice has legitimate traffic spiked above capacity, and that was because of very popular shows in the streaming business, something not even the producers hoped for. The extra lines helped then.
Oh, and IOPS/bandwidth for the SAN are oversold; we are betting not everyone will run dd "tests" AND intensive DB apps at the same time. However, we CAN scale it fast if needed, as we have another, brand-new SAN that can take up some flak, 10 times more expensive than the current 150k one. But the chances of needing that are practically zero.
For the second time you attack without thinking. What am I, target practice? Those terms are in the ToS of anyone who has been in business long enough to weed out people bent on doing harm.
Here is the version for people who do not intend to harm the system:
http://board.prometeus.net/viewtopic.php?f=8&t=1390&sid=4fac669ee627179cf6108b167c969d9c
Not all, but most. If by move you mean expanding, yes, we already expanded; however, only a lesser version is there: no SAN, so no failover (HA is still on, though), and storage is SSD. There is no virtual router, so no fancy internal routing or load balancing, but you do have an external firewall (which is on by default; you need to punch holes through it for whatever needs access, in BOTH directions).
http://board.prometeus.net/viewtopic.php?f=8&t=1178
This is why I wrote those tutorials: we see how people behave and try to plug the most common pitfalls.
@Maounique,
Basically the restrictions are kind of scary, so let's work through some cases.
Do you ever find any of the above apps categorized, or likely to be flagged, as abusive?
Thanks, I will check all of them. I was looking at tutorials, but I missed this one.
I have compared them, and with virtio it is between 2 and 4 times faster!!!
There is no problem with memory on KVM, unless you plan on using swap as extra RAM. On OVZ the problem is with using vswap permanently: if it gets filled, there are issues on the node with iowait and the OOM killer. That is how OVZ works; if too many people use their vswap to the fullest, or in some special circumstances, the kernel gets bogged down.
We consider sustained disk access of more than 5 MB/s abusive, or more than 10 MB/s for a few hours. At 20-30 MB/s for more than an hour, it is probably swap being used as RAM. If you need more than that, you probably need a dedicated server.
Please see above with the disk issue.
No problem; we allow a load of at most the number of cores. However, we do not suspend you for going above that for days, we just move you in with the other abusers.
See above.
No problem
We do not have a list of abusive apps, other than those listed in ToS/AUP.
Basically, stay below 5 MB/s and a few hundred IOPS; this serves most general purposes. If you need more, consider either SSD for IOPS or a dedi with RAID for continuous disk hammering. Bursts are allowed: we tolerate up to 40 MB/s and 1k IOPS for a few minutes to half an hour.
Stay below 10k pps, with bursts allowed up to 40-50k for a few minutes, and 300 Mbps continuous transfer, with bursts up to 600 Mbps depending on instance RAM.
Stay below the maximum load (the number of cores), with bursts allowed for a few minutes to half an hour.
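Those limits are easy to check against your own monitoring numbers. A rough sketch using the figures quoted above (5 MB/s disk, 10k pps, load at most the core count); how you actually measure the inputs (/proc/diskstats, interface counters, os.getloadavg()) is left out:

```python
import os

# Sustained soft limits quoted above; short bursts beyond them are tolerated.
DISK_MB_S = 5.0   # disk throughput
PPS = 10_000      # packets per second

def over_limits(disk_mb_s, pps, loadavg_1m, cores=None):
    """Return which metrics exceed the sustained limits (empty list = fine)."""
    if cores is None:
        cores = os.cpu_count() or 1
    over = []
    if disk_mb_s > DISK_MB_S:
        over.append("disk")
    if pps > PPS:
        over.append("pps")
    if loadavg_1m > cores:
        over.append("load")
    return over

print(over_limits(disk_mb_s=3.2, pps=2_000, loadavg_1m=0.8, cores=2))  # -> []
```

Anything that shows up in the returned list persistently is what, per the posts above, would eventually get an instance moved to the "abusers" node.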
Compared with most other providers, our limits are more than generous in every respect.
Anyway, before suspension you have ample time to solve the issues: we warn, move, limit and shut down the instance before suspending the account. We haven't suspended anyone yet, except for illegal activities (DoS, spam, phishing, scams, scans), and even those could be counted on the fingers of one hand. Even miners were moved (with no downtime) to the "sacrificed" node to hinder each other, and they got the message; no suspension was needed, and we currently have no miners left.
At the moment there are some 400 active accounts with 1000+ instances; some people pay tens of euros a day for complex setups and intensive apps. Except for the miners, we haven't had to move anyone; load is very low, and there is still capacity left for 3 times more usage. We can add more pods and disk at any time, and right now redundancy is around N+5 or 6. This is not the cheap setup you might expect: the SAN alone costs more than all the gear some other people rent, not to mention the second SAN available in case of need, the FC fabric and all that.
Please quit thinking in terms of LEB resources just because the price is LEB-compatible. We have run VMware/RHEV6 clouds for years, sold to corporations which pay thousands a month for much lower resources and older gear; now this kind of setup is offered there too, and it seems to be appreciated by people not forced into VMware or the like by upper-management orders or required certifications.
Is bandwidth flexible? I only read 1.5 GB per hour.