Comments
That's a lot of non-functioning hardware
Is that the old refurbished crap you are buying?
It's impossible to have such a high failure rate on new equipment
If it’s cheap enough, I’ll buy this
For now I use in order of spend/month:
OVH (hypervisor and tons of ips, for company usage)
Avoro (vps for personal projects that might turn to be company projects if good enough)
Hetzner (virtfusion for client control of the OVH hypervisor)
HostHatch (backup server for everything)
My point is, I only use established providers. For company prod. usage, my only critical provider is OVH really
BUT, even for someone like me, who is scared to trust basically every host, a very cheap VPS for backups of backups makes sense
I really hope @FlorinMarian realizes that $5 a TB is standard pricing here
About $3 / TB is possible on LET and it’ll be hosted in a datacenter
So you're kind of forced to offer something affordable, even compared to other LET offers, for buying it to actually make sense
He's building a relatively big cluster of resources in a room in his (parents'?) house
Do you really think doing this on brand new branded servers for $7000 each makes sense?
You can criticize our boy Florin for a lot, but putting brand new enterprise grade branded hardware in a closet and having the prices reflect that would never work
Defective hardware paired with massive incompetence. What could go wrong...
Building a lowend “datacenter” in his parents' house with a bunch of old used hardware… what could possibly go wrong?
If you buy used HW at least get brand-new SSDs.
Try the Dunkin nitro cold brew with the sweet cold foam, it’s 🔥
WTF is this?
No biggie, just something brought to you by a boy in red shirt riding a yellow bicycle.
What is up with Americans calling everything but coffee "coffee"?
It's coffee, but with nitrogen (you cannot order a large size), it's 🔥
Somehow I wish you the best of luck, because I would never have the balls that you have. So at least props for this.
It looks like your knowledge of networking and virtualization is less than mine, and you're already taking money from clients, while I still wouldn't do it with that setup. Sure, you don't advertise to business clients, but I don't see redundancy anywhere besides the two ISPs. No redundancy at switch level, some servers without multiple power supplies, and I'm also not sure the climate is right for that kind of equipment. I don't see spare hosts in case your main ones go boom, etc. I really would think about at least some redundancy, even in a low-budget home-style DC, if you want to make money.
I'm a sysadmin myself and manage two vSphere clusters with 8 nodes each, redundant PureStorage storage connected through iSCSI, and multiple UPS systems with 24 hours of runtime, at two different sites (a company "DC") with different ISPs, and I still think the redundancy is not enough… and that's for a 500-person company…
Also, don't you have some kind of hardware firewall in place, like a simple Fortigate?
Related to redundancy:
As for the entire infrastructure, I intend to buy a new switch for both redundancy and performance to have more SFP+ ports.
If the Storage or the Dell server had problems, I would migrate the services to the new cluster, which has redundant power sources and 4 independent nodes (for issues other than a PSU failure).
I don't think I will add redundancy to those two servers; rather, I will replace them with other servers that do have redundant PSUs.
Thank you for the feedback!
Hello!
Another week has passed in which several things have happened.
1. ISP #2 has still not installed the 500Mbps guaranteed / 1Gbps best-effort optical fiber that would have allowed us to start selling our services, and when I contacted them they answered rudely with "so what if the 30 days in the contract, during which we are obliged to do your installation, have somehow passed?". I'm still waiting; considering that two weeks after signing they realized a field had been filled in incorrectly and made me sign a new contract, it seems they are rude enough to postpone this installation for another 6 weeks.
2. Although the number of services had halved, the power consumption of the Storage server was as high as in the old DC (~0.45 kW). I started investigating and discovered that although the room was at a constant 21 degrees Celsius, the Storage server was reading an ambient temperature of 26 degrees Celsius. That's why we rearranged the servers in the rack, as you can see in the picture, and now the ambient temperature readings reflect reality for all servers.
3. Because the Storage server had moments when it ran its fans at maximum speed (22K rpm), it made an unbearable noise; I had already moved all the KVM HDD servers to SSDs and was ready to sell the server on OLX precisely because of the noise. Before doing that, however, I decided to try TrueNAS on it and possibly share it with the other servers, so that I could keep using it after having just upgraded it (10x 6TB 12Gbps HDDs and 4x Intel S3510 1.2TB SSDs). I was very surprised to see that the noise and power-consumption problems had the Debian OS as their source: with the same hardware and an even more intensive read/write workload, the fans stay at 12K rpm and consumption is 0.22 kW, that is, half. This is how we learned under what conditions we will offer the Storage servers when we put them up for sale. We have a pool with the 10x 6TB 12Gbps HDDs in RAIDZ-3 (it survives 3 of the 10 HDDs being lost; data is lost starting with the 4th), accompanied by a RAID1 Intel S3510 SSD cache for good write speed, not just the read speed offered by the RAIDZ-3 configuration. (A rough capacity sketch follows this list.)
4. Since we also have 2x 4TB HDDs in addition to the 10 in the Storage server, we have created a RAID0 pool with which we will offer 3 free daily backups for all SSD servers; clients will be able to see them in the control panel and restore them if needed.
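For anyone curious how that disk layout works out on paper, here is a back-of-envelope sketch. The raw figures (10x 6TB in RAIDZ-3, 2x 4TB in RAID0) come from the post above; treating usable space as simply (disks minus parity) × disk size is my simplification, since real ZFS numbers will be lower once metadata, padding and TB-vs-TiB conversion are counted.

```python
# Back-of-envelope capacity / fault-tolerance math for the two pools
# described above. Figures are nominal TB; real usable space is lower.

DATA_DISKS = 10        # 10x 6TB HDDs in the main pool
DISK_TB = 6
PARITY = 3             # RAIDZ-3: three disks' worth of parity

raw_tb = DATA_DISKS * DISK_TB
usable_tb = (DATA_DISKS - PARITY) * DISK_TB   # ~42 TB before overhead
print(f"RAIDZ-3 pool: {raw_tb} TB raw, ~{usable_tb} TB usable, "
      f"survives {PARITY} failed disks")

# Backup pool: 2x 4TB striped (RAID0) -> full capacity, zero redundancy,
# tolerable here because it only holds copies of data that lives elsewhere.
backup_tb = 2 * 4
print(f"RAID0 backup pool: {backup_tb} TB usable, survives 0 failed disks")
```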
I wish you all a pleasant weekend!
TL;DR: how will all of this benefit the client?
Hey!
1. They can see for themselves that they have a provider who prefers not to sell while there are problems, unlike others whose servers die one by one but who keep coming with all kinds of new and tempting offers. (The gain is trust.)
2. The fact that you receive and have access to 3 backups of your server at no additional cost is, I think, a big deal.
I don't quite understand why you went with a CL3100 in the first place.
It's designed for extremely high density, which comes at the cost of expandability and, of course, noise.
Considering you already have a full rack, I doubt that you benefit from the higher density.
If you cram 12 3.5" HDDs into a 1U chassis, the fans have to ramp up to create enough static pressure to cool the HDDs.
A regular 2U 12 bay chassis would most likely have been the better choice.
Hey!
In the old data center we had to pay 12 euros per month for each extra 1U. Under those conditions, it was the greatest joy when I found out I could fit so many SAS drives in a 1U case.
Is it possible to colocate?
I don't know if the question is just ironic, but definitely not. This first rack will be filled only with our own equipment, because that is the only option in which we can be sufficiently profitable.
As can be seen in some of the photos behind the rack, we have a thick bundle of optical fiber precisely because we are also prepared for the scenario in which we have to knock down two walls and go from 6 square meters to 40 square meters, a space in which we could be profitable with other services as well.
Let's avoid unsalted jokes: the house has 153 square meters on the ground floor and the upper floor is currently uninhabited, so it wouldn't take much to change the purpose of the ground floor, as long as there were a well-defined goal behind it.
I actually could do with more unsalted jokes, watching my salt intake at the moment
Sure, if the charges are higher you can always start your own DC. Florin will help with free consulting and direct supervision.
If you are still waiting for RCS, just cancel them and talk to the guys from ines; I'm sure they will have a good deal with better network quality.
RCS is OK for residential use, nice in Bucharest on 10Gbps and more, but crap in your area.
Nice job so far!
Good luck
Thank you for your feedback!
Actually, I'm waiting for Orange. RCS will be our backup, if we go by the guaranteed bandwidth (150Mbps RCS, 500Mbps Orange).
I've cleaned up this thread
@jmaxwell said:
It should have been ironic, but in the old DC I was paying 200 EUR for two servers, while at home, with so much still to learn and optimize, I ended up with a monthly cost of 450 EUR (but with a guaranteed minimum of 650Mbps; in the old DC there was no guaranteed bandwidth and we only had 5TB included per month per server, with each extra TB costing 3.6 EUR).
In the data center, adding a node that draws 110W would have cost me 84 EUR, while at home it costs 24 EUR (considering that I only pay extra for electricity; the AC and the switch consume about the same amount).
In the data center it costs 0.31 EUR for each kWh consumed, while at home the price is half of that in summer and probably 70% of it in winter.
Maybe I seem too optimistic again, but consider that I currently have 6 servers running 24/7, of which 3 are intensively used and 3 are idle, and the total power draw is below 2 kW, including the cooling and the switch.
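For reference, a rough sketch of the electricity math implied above. The 110W node, the 0.31 EUR/kWh DC rate and the "half in summer / 70% in winter" home rates come from the posts; the assumption that the AC and switch roughly double the at-home draw is mine, and the 84 EUR DC figure clearly covers more than electricity alone.

```python
# Back-of-envelope monthly electricity cost for one extra 110 W node.
# Rates are the ones quoted in the thread; the cooling factor is assumed.

HOURS_PER_MONTH = 24 * 30
node_kw = 0.110                      # 110 W node

kwh = node_kw * HOURS_PER_MONTH      # ~79 kWh per month

dc_rate = 0.31                       # EUR/kWh in the old DC
home_summer = dc_rate / 2            # "half in summer"
home_winter = dc_rate * 0.7          # "probably 70% in winter"
cooling_factor = 2                   # AC + switch roughly double the draw (assumption)

print(f"Energy per month: {kwh:.0f} kWh")
print(f"DC, electricity only: {kwh * dc_rate:.0f} EUR (the quoted DC total was 84 EUR)")
print(f"Home, summer: {cooling_factor * kwh * home_summer:.0f} EUR (quoted ~24 EUR)")
print(f"Home, winter: {cooling_factor * kwh * home_winter:.0f} EUR")
```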
Yeah, it's cheaper hosting at home.
After investing 15K EUR, yes - it is.
Hombre, are you really using KVM (ATEN)? What happened to iDRAC?
Happy to see you have a Gen 13 Dell.
The 2.5-inch 2-bay unit: is it a shelf or a server?
If you need a shelf, I have 2 for sale: 1 DELL 1220 and 1 NETAPP, both with dual SAS controllers @ 6Gbps. The shelves work with SATA drives too.
If you're in need of 10/25/40/100Gbps DAC/AOC/SFP+, contact me and I will give you a nice % off.
Where was this? We actually pay 30 now, LOL!