
R-Pi Questions


Comments

  • PulsedMedia Member, Patron Provider

    @jmgcaguicla said:

    @PulsedMedia said:
    It looks like we could offer a limited run at 10€/Month each with 2x2TB 7200rpm HDDs + 1Gig network connection and 2x3TB for 13-15€/month. And go potentially as big as 2x20TB drives on each :)

    Now we're talking

    You up for a preorder, or dozen? ;)

    Once tested and we know the units are coming we'll open a preorder then.

    After the first special batch(es) we will have to raise the regular price by a few euros, and stock will be limited by the number of old, otherwise-useless HDDs we have.

    2x8TB will probably be more along the lines of 35€ a month for example.

  • @PulsedMedia said:

    @jmgcaguicla said:

    @PulsedMedia said:
    It looks like we could offer a limited run at 10€/Month each with 2x2TB 7200rpm HDDs + 1Gig network connection and 2x3TB for 13-15€/month. And go potentially as big as 2x20TB drives on each :)

    Now we're talking

    You up for a preorder, or dozen? ;)

    Once tested and we know the units are coming we'll open a preorder then.

    After the first special batch(es) we will have to raise the regular price by a few euros, and stock will be limited by the number of old, otherwise-useless HDDs we have.

    2x8TB will probably be more along the lines of 35€ a month for example.

    Count me in for the pre-order

    Thanked by: PulsedMedia
  • @PulsedMedia said:
    You up for a preorder, or dozen? ;)

    Once tested and we know the units are coming we'll open a preorder then.

    Sure, reserve me a slot. Primarily interested in 2x2TB or 2x3TB as you mentioned in the last post.

    I'd happily be a Guinea pig for this new product line, give me the oldest spinning rust you have (I doubt you have drives older than the ones in my KS machines anyway).

  • @ahnlak said:

    @TimboJones said:
    Netbooting adds hassle, latency, delays and additional failure points and still doesn't prevent corruption.

    Well if your complaint is SD corruption, not using the SD kind of prevents it!

    "SD corruption" is just filesystem corruption from things like unexpected power issues and reboots.

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.
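
    For what it's worth, the Pi firmware keeps throttle flags that would settle this empirically. A minimal sketch, assuming a Raspberry Pi OS install where vcgencmd is available:

    # Decode the get_throttled bitfield; the "has occurred" bits are sticky since boot.
    flags=$(vcgencmd get_throttled | cut -d= -f2)   # e.g. throttled=0x50000
    (( flags & 0x4 ))     && echo "currently throttled"
    (( flags & 0x8 ))     && echo "soft temperature limit active"
    (( flags & 0x40000 )) && echo "throttling has occurred since boot"
    (( flags & 0x80000 )) && echo "soft temperature limit has occurred since boot"

    A non-zero "has occurred" bit after a day of uptime would show whether a board is quietly throttling.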

  • @PulsedMedia said:
    ordered an Odroid HC4 for testing, along with XU4Q and MC1 :)

    Yeah, I'm a big fan of the HC4. The ones I have deployed have good uptime. Performance is good.

    Frankly, the XU4Q/MC1 are pretty long in the tooth at this point.

  • edited January 2022

    @lanefu said:

    Yeah, I'm a big fan of the HC4. The ones I have deployed have good uptime. Performance is good.

    What's the power consumption of the HC4 with two drives?

  • nvme Member
    edited January 2022

    @PulsedMedia said:

    @jmgcaguicla said:

    @PulsedMedia said:
    It looks like we could offer a limited run at 10€/Month each with 2x2TB 7200rpm HDDs + 1Gig network connection and 2x3TB for 13-15€/month. And go potentially as big as 2x20TB drives on each :)

    Now we're talking

    You up for a preorder, or dozen? ;)

    Once tested and we know the units are coming we'll open a preorder then.

    After the first special batch(es) we will have to raise the regular price by a few euros, and stock will be limited by the number of old, otherwise-useless HDDs we have.

    2x8TB will probably be more along the lines of 35€ a month for example.

    I am up for pre-order. Maybe these can be connected via the internal network to a seedbox? ;)

    Thanked by: PulsedMedia
  • @TimboJones said:

    @ahnlak said:

    @TimboJones said:
    Netbooting adds hassle, latency, delays and additional failure points and still doesn't prevent corruption.

    Well if your complaint is SD corruption, not using the SD kind of prevents it!

    "SD corruption" is just filesystem corruption from things like unexpected power issues and reboots.

    Then you have some other issues somewhere; I've been running Pis since they launched, and while I've certainly had a few SD cards shove their heads up their asses, I have yet to come across general filesystem corruption. If there was something magical about Pi hardware that caused what is essentially just Debian to randomly corrupt filesystems, I'm sure there'd be more talk about it.

    Now if you want to complain about the crappy Wifi on them...!

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.

    I've never bothered with continuous monitoring, but I've rarely seen temps bounce above 50C, especially idling. Have you got them in some restrictive cases or something?

    It's the "throttling when idling" bit that confuses me; an idling Pi just doesn't do enough to get up into throttling temperatures, so either you've got a load of stuff running that you don't know about, or you keep them on top of a radiator or something :smiley:

  • @ahnlak said:

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.

    I've never bothered with continuous monitoring, but I've rarely seen temps bounce above 50C, especially idling. Have you got them in some restrictive cases or something?

    It's the "throttling when idling" bit that confuses me; an idling Pi just doesn't do enough to get up into throttling temperatures, so either you've got a load of stuff running that you don't know about, or you keep them on top of a radiator or something :smiley:

    I have a Pi 4 running some fairly basic tasks (without active cooling at the moment) and only see it go between ~30C and 50C.

    root@rpi:~# uptime
     10:08:35 up 1 day, 13:48,  3 users,  load average: 0.47, 0.38, 0.37
    root@rpi:~# vcgencmd measure_temp
    temp=33.6'C
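
    A minimal sketch of the "continuous monitoring" mentioned above, assuming the same vcgencmd tool as in the output (the log path is arbitrary):

    # Append a timestamped SoC temperature reading once a minute.
    while true; do
        echo "$(date -Is) $(vcgencmd measure_temp)" >> /var/log/soc-temp.log
        sleep 60
    done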
    
  • lanefu Member
    edited January 2022

    @rajprakash said:

    @lanefu said:

    Yeah, I'm a big fan of the HC4. The ones I have deployed have good uptime. Performance is good.

    What's the power consumption of the HC4 with two drives?

    With 2x 8TB HGST enterprise drives:

    • 48W on spin-up.
    • 15W idle.
    • 21.2W when zeroing both drives and running the 7zip CPU benchmark.
  • PulsedMedia Member, Patron Provider

    @jmgcaguicla said: I'd happily be a Guinea pig for this new product line, give me the oldest spinning rust you have (I doubt you have drives older than the ones in my KS machines anyway).

    How old you got there? We got some still available we bought 9 years ago! :O

    @lanefu said: Yeah, I'm a big fan of the HC4. The ones I have deployed have good uptime. Performance is good.

    Awesome! Do tell more about your experiences with them?

    @rajprakash said: What's the power consumption of the HC4 with two drives?

    Just over 15W, so something like 17-18W at the wall.

    @nvme said: I am up for pre-order. Maybe these can be connected via the internal network to a seedbox?

    Sorry, QoS between 0.0.0.0/0 and our networks would require a hefty switch; see the bandwidth explanation above.
    Can't do the same as with seedboxes, not easily anyway :(

    @lanefu said:

    @rajprakash said:

    @lanefu said:

    Yeah, I'm a big fan of the HC4. The ones I have deployed have good uptime. Performance is good.

    What's the power consumption of the HC4 with two drives?

    With 2x 8TB HGST enterprise drives:

    • 48W on spin-up.
    • 15W idle.
    • 21.2W when zeroing both drives and running the 7zip CPU benchmark.

    Oh damn, that made me realize they used the low-power 5400rpm SMR Compute models for the test on their site, not fast 7200rpm drives :O Tho SMR drives can be decent; the very first-release 8TB Archive drives were much faster than these Compute drives. With very little writing, SMR will do all right for a single user tho.

    Oh well, we always knew we'd need to measure and test this thoroughly.

  • jmgcaguicla Member
    edited January 2022

    @PulsedMedia said:
    How old you got there? We got some still available we bought 9 years ago! :O

    That might literally be just spinning rust :lol: Are those still used in the product lines you offer today?

    Closest I have is 7-ish years:

        9 Power_On_Hours 0x0012 092 092 000 Old_age Always - 60520
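
    For reference, that raw value is hours; a quick conversion sketch (assuming smartctl from smartmontools, with /dev/sda as a hypothetical device):

    # RAW_VALUE is the 10th column of smartctl's attribute table.
    hours=$(smartctl -A /dev/sda | awk '/Power_On_Hours/ {print $10}')
    echo "scale=1; $hours / 8760" | bc    # 60520 / 8760 ≈ 6.9 years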

  • PulsedMedia Member, Patron Provider

    @jmgcaguicla said:

    @PulsedMedia said:
    How old you got there? We got some still available we bought 9 years ago! :O

    That might literally be just spinning rust :lol: Are those still used in the product lines you offer today?

    Closest I have is 7-ish years: 9 Power_On_Hours 0x0012 092 092 000 Old_age Always - 60520

    Pretty sure there is a dedi somewhere running with drives that age :)

    But for these we would re-deploy the ancient drives. At the end of the day, there's nothing wrong with them. If they survive this long they are "golden samples"; who knows how long they'll keep working. Sure, the AFR increases a bit after 3 years, but then it seems to go down again.

  • TimboJones Member
    edited January 2022

    @ahnlak said:

    @TimboJones said:

    @ahnlak said:

    @TimboJones said:
    Netbooting adds hassle, latency, delays and additional failure points and still doesn't prevent corruption.

    Well if your complaint is SD corruption, not using the SD kind of prevents it!

    "SD corruption" is just filesystem corruption from things like unexpected power issues and reboots.

    Then you have some other issues somewhere; I've been running Pis since they launched, and while I've certainly had a few SD cards shove their heads up their asses, I have yet to come across general filesystem corruption. If there was something magical about Pi hardware that caused what is essentially just Debian to randomly corrupt filesystems, I'm sure there'd be more talk about it.

    It's a widely known issue and the reason for having Pi-specific distros with logging to RAM, etc. You're showing your inexperience.

    Now if you want to complain about the crappy Wifi on them...!

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.

    I've never bothered with continuous monitoring, but I've rarely seen temps bounce above 50C, especially idling. Have you got them in some restrictive cases or something?

    It's the "throttling when idling" bit that confuses me; an idling Pi just doesn't do enough to get up into throttling temperatures, so either you've got a load of stuff running that you don't know about, or you keep them on top of a radiator or something :smiley:

    You should be confused. You keep talking like you barely use your Pi and/or idle it, and I'm talking about using it like a dedicated server 24/7, with sustained use where the CPU is at 100% for periods of time, not for seconds per day.

    Anyway, we seem to be talking about two different things: running SBCs in a datacenter for profit and running them at home for simplicity. There's a reason providers keep moving to more powerful EPYCs and not low-cost SBCs. It's a niche and limited use case.

    Thanked by: lanefu
  • @TimboJones said:

    @ahnlak said:

    @TimboJones said:

    @ahnlak said:

    @TimboJones said:
    Netbooting adds hassle, latency, delays and additional failure points and still doesn't prevent corruption.

    Well if your complaint is SD corruption, not using the SD kind of prevents it!

    "SD corruption" is just filesystem corruption from things like unexpected power issues and reboots.

    Then you have some other issues somewhere; I've been running Pis since they launched, and while I've certainly had a few SD cards shove their heads up their asses, I have yet to come across general filesystem corruption. If there was something magical about Pi hardware that caused what is essentially just Debian to randomly corrupt filesystems, I'm sure there'd be more talk about it.

    It's a widely known issue and the reason for having Pi-specific distros with logging to RAM, etc. You're showing your inexperience.

    Yes, to protect the SD card; it's not a general filesystem problem.

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.

    I've never bothered with continuous monitoring, but I've rarely seen temps bounce above 50C, especially idling. Have you got them in some restrictive cases or something?

    It's the "throttling when idling" bit that confuses me; an idling Pi just doesn't do enough to get up into throttling temperatures, so either you've got a load of stuff running that you don't know about, or you keep them on top of a radiator or something :smiley:

    You should be confused. You keep talking like you barely use your Pi and/or idle it, and I'm talking about using it like a dedicated server 24/7, with sustained use where the CPU is at 100% for periods of time, not for seconds per day.

    Hang on, I'm talking about them idling because you specifically said "they're constantly at their CPU limit just idling".

    It's kind of hard to have a sensible discussion if you're just going to redefine the starting conditions to suit your argument, so let's just say you're right and I'm wrong, I guess? :smile:

  • @ahnlak said:

    @TimboJones said:

    @ahnlak said:

    @TimboJones said:

    @ahnlak said:

    @TimboJones said:
    Netbooting adds hassle, latency, delays and additional failure points and still doesn't prevent corruption.

    Well if your complaint is SD corruption, not using the SD kind of prevents it!

    "SD corruption" is just filesystem corruption from things like unexpected power issues and reboots.

    Then you have some other issues somewhere; I've been running Pis since they launched, and while I've certainly had a few SD cards shove their heads up their asses, I have yet to come across general filesystem corruption. If there was something magical about Pi hardware that caused what is essentially just Debian to randomly corrupt filesystems, I'm sure there'd be more talk about it.

    It's a widely known issue and the reason for having Pi-specific distros with logging to RAM, etc. You're showing your inexperience.

    Yes, to protect the SD card; it's not a general filesystem problem.

    On the cooling front, I've no idea what you're doing to your "idling" Pis, but I don't have any cooling on any of mine that run 24/7 without any problems at all. Maybe my house is cold.

    I'd believe you if you monitored the CPU temperature and your reply said "it hardly/never goes above X temperature", or told me whether you just let it throttle performance to keep the temperature down.

    I've never bothered with continuous monitoring, but I've rarely seen temps bounce above 50C, especially idling. Have you got them in some restrictive cases or something?

    It's the "throttling when idling" bit that confuses me; an idling Pi just doesn't do enough to get up into throttling temperatures, so either you've got a load of stuff running that you don't know about, or you keep them on top of a radiator or something :smiley:

    You should be confused. You keep talking like you barely use your Pi and/or idle it, and I'm talking about using it like a dedicated server 24/7, with sustained use where the CPU is at 100% for periods of time, not for seconds per day.

    Hang on, I'm talking about them idling because you specifically said "they're constantly at their CPU limit just idling".

    It's kind of hard to have a sensible discussion if you're just going to redefine the starting conditions to suit your argument, so let's just say you're right and I'm wrong, I guess? :smile:

    Monitor the temperature next time it does a meaningful apt update.

  • @PulsedMedia It would be ace if you offered Pi colo for seedboxes. I'd love to send you the Pi, case (https://shop.inux3d.com/en/home/81-124-terrapi-evo.html), PoE hat if needed, and SSDs, all imaged, IP configured and ready to go.

  • PulsedMedia Member, Patron Provider
    edited January 2022

    @Quartermaster said:
    @PulsedMedia It would be ace if you offered Pi colo for seedboxes. I'd love to send you the Pi, case (https://shop.inux3d.com/en/home/81-124-terrapi-evo.html), PoE hat if needed, and SSDs, all imaged, IP configured and ready to go.

    Might be difficult, as it's most efficient when everything is exactly the same. Then again, I see there are decent prices on those, so it might be worth it. Albeit it would cost more than just renting one from us, since these would not be built by us and therefore we cannot host them as densely. And density is the issue; our DC is rather small. We are looking for a new place, but it takes time to find and build a DC.

    So density drives a lot of our decisions.

    Found this interesting piece tho: https://www.raspberry-pi-geek.com/Archive/2014/03/Colocation-of-Rasp-Pi-servers

    --

    Testing Odroid stuff arrives Friday! I think I know what I'll be doing this weekend by the looks of it.
    If testing during the weekend/early next week goes well, we will order the first 24 next week and release the preorder.

    -Aleksi

  • PulsedMedia Member, Patron Provider

    Interesting products also at: https://shop.allnetchina.cn/search?type=product&q=cluster

    I'm a complete newbie with RPi etc. I don't think I've ever used one before this. Maybe unknowingly, but I have not fiddled with the hardware despite owning some ancient RPi and Pine64 boards. Learning this world right now, and kinda feeling like a kid again, back when I used to build amps and LED things with pico microcontrollers for fun :)

    Might need community help when working out the quirks with the HC4, which I am hoping makes it to reality.

  • PulsedMedia Member, Patron Provider

    Got the HC4 today and got to play around with it. It took quite a bit of figuring out to get Armbian running, and then it would not boot; I had to erase petitboot to even try to install it.

    Power consumption was also higher than expected, and not only due to the craptastic PSU from Hardkernel, which has a power factor of 42% (!!!!) -- but actual DC input measured with 2x 7200rpm drives was ~18-19W idle, and it peaked during any regular task (i.e. installing Armbian, running apt) to over 33W. It works fine on 12V tho, and I think the recommendation for 15V is because they might've picked a slightly low-quality DC barrel plug?
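
    To put that power factor in perspective (a worked example with the figures above, not a measurement): apparent power = real power / PF, so ~19W of real draw through a 42% PF supply is roughly 19 / 0.42 ≈ 45 VA at the wall, which is what circuits and UPS capacity have to be sized against.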

    Size is also larger than expected, and it's obvious we cannot slim it down enough to fit 24x in 4U. Maybe 18x, just maybe; it might be as low as 8x.

    Only the 4th USB hub would function as well, and then I realized, of course, these cannot boot from a USB stick -.- I have not used SBCs much, very little in fact.

    Since the SPI flash is so easily overwritten, these cannot be used efficiently as dedis: between each user the SPI has to be rewritten. Even if we get petitboot + PXE + automated installation of Armbian / DietPi or whatever, we cannot trust the petitboot on board after a customer has cancelled the node.
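
    One mitigation sketch for that trust problem, assuming a rescue/netboot Linux on the node with mtd-utils installed, the SPI NOR exposed as /dev/mtd0, and a known-good petitboot image on hand (all assumptions, not the documented HC4 procedure):

    # Reflash the SPI NOR from a trusted image, then verify by readback hash.
    flash_erase /dev/mtd0 0 0                       # erase the whole chip
    flashcp -v petitboot-known-good.img /dev/mtd0   # write the known-good image
    dd if=/dev/mtd0 bs=1M | sha256sum               # compare against a recorded hash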

    That might be solvable tho. All of these issues might be solvable, but at this point I am not certain we should do it, unless we can get custom PCBs from Hardkernel which fix the wasted-space issue. Maybe?

    On the hardware side we'd def need to throw the casing away, desolder the reset button, use angled connectors and get a custom rack casing built.

    XU4Q, MC1 or C4 hosting could be possible tho, but non-massive storage + big bandwidth stuff is not exactly our niche -- but the hardware side for them would be easier -- no big drives.

    A lot of things to consider! Since for a lot of people the keyword is DEDICATED. Not a VM, but DEDICATED.

    Might need to find a person who would be willing to develop the base software (PXE boot + Armbian installation on RAID, etc.) for these on our behalf; we just do not have the human resources to tackle that right now, too many other projects.

    We'd probably be willing to build the hardware side of things, if we can find a person who is very familiar with this type of hardware and skilled enough with embedded software to build secure network reinstallation and the like.

    Initial impressions however are that our time is better spent on building more traditional hardware. But we'll keep looking into it and playing around with it.

    Oh, and as a VM we can offer more CPU, RAM and storage for less money than what these would cost. But dedi is always dedi, and you'd get more IOPS too.

    -Aleksi

  • @PulsedMedia said:
    Got the HC4 today and got to play around with it. It took quite a bit of figuring out to get Armbian running, and then it would not boot; I had to erase petitboot to even try to install it.

    Yeah, petitboot is annoying frankly. It just tries to kexec things.

    Power consumption was also higher than expected, and not only due to the craptastic PSU from Hardkernel, which has a power factor of 42% (!!!!)

    Whoa. I'm gonna have to check mine

    -- but actual DC input measured with 2x 7200rpm drives was ~18-19W idle, and it peaked during any regular task (i.e. installing Armbian, running apt) to over 33W. It works fine on 12V tho, and I think the recommendation for 15V is because they might've picked a slightly low-quality DC barrel plug?

    Yeah good guess.

    Size is also larger than expected, and it's obvious we cannot slim it down enough to fit 24x in 4U. Maybe 18x, just maybe; it might be as low as 8x.

    Only the 4th USB hub would function as well, and then I realized, of course, these cannot boot from a USB stick -.- I have not used SBCs much, very little in fact.

    You could boot from USB, PXE, or SATA by installing u-boot to SPI.

    Since the SPI flash is so easily overwritten, these cannot be used efficiently as dedis: between each user the SPI has to be rewritten. Even if we get petitboot + PXE + automated installation of Armbian / DietPi or whatever, we cannot trust the petitboot on board after a customer has cancelled the node.

    This got me curious. In theory you could flash the SPI, then there may be a pin on the SPI flash to ground to make it read-only.

    That might be solvable tho. All of these issues might be solvable, but at this point I am not certain we should do it, unless we can get custom PCBs from Hardkernel which fix the wasted-space issue. Maybe?

    On the hardware side we'd def need to throw the casing away, desolder the reset button, use angled connectors and get a custom rack casing built.

    XU4Q, MC1 or C4 hosting could be possible tho, but non-massive storage + big bandwidth stuff is not exactly our niche -- but the hardware side for them would be easier -- no big drives.

    A lot of things to consider! Since for a lot of people the keyword is DEDICATED. Not a VM, but DEDICATED.

    Might need to find a person who would be willing to develop the base software (PXE boot + Armbian installation on RAID, etc.) for these on our behalf; we just do not have the human resources to tackle that right now, too many other projects.

    We'd probably be willing to build the hardware side of things, if we can find a person who is very familiar with this type of hardware and skilled enough with embedded software to build secure network reinstallation and the like.

    Yeah, wish I had the time. Something that I've been thinking about for a while.

    Initial impressions however are that our time is better spent on building more traditional hardware. But we'll keep looking into it and playing around with it.

    Oh, and as a VM we can offer more CPU, RAM and storage for less money than what these would cost. But dedi is always dedi, and you'd get more IOPS too.

    -Aleksi

    Yep, fair conclusion. The Radxa 5 is the next thing to keep an eye on.

  • PulsedMedia Member, Patron Provider

    @lanefu said: You could boot from USB, PXE, or SATA by installing u-boot to SPI.

    Thanks! Did not know that. Looks like documentation already exists for that too :)

    @lanefu said: This got me curious. In theory you could flash the SPI, then there may be a pin on the SPI flash to ground to make it read-only.

    And just adding a DIP switch to control RO/RW :)
    Finding and testing that (the R&D bit) would probably take half a workday (4hrs), so in reality 12hrs :D Then it adds about 15-20mins to each board setup; we'd already need to take the soldering iron out because of the reset switch.
    Shame it's possible to actually shut down the board; otherwise just using the reset switch to reboot it would be sufficient with a cheap optocoupler or so, but we'd still need that big expensive relay for the main power too :(

    Found in the docs as well that the default petitboot install already has Ubuntu & Debian installers built in :O Just need to enable it from the shell; it seems to fake something for PXE boot, so we could copy & paste the DHCP/PXE config from there and then just add a preseed file.
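
    A minimal sketch of that DHCP/PXE piece, assuming a dnsmasq-based provisioning host (the interface name, address range and boot file are placeholders, not anything from the petitboot docs):

    # Hypothetical provisioning host: minimal dnsmasq config for PXE (DHCP + TFTP).
    printf '%s\n' \
      'interface=eth1' \
      'dhcp-range=10.0.0.100,10.0.0.200,1h' \
      'enable-tftp' \
      'tftp-root=/srv/tftp' \
      'dhcp-boot=pxelinux.0' > /etc/dnsmasq.d/hc4-pxe.conf
    systemctl restart dnsmasq

    The debian-installer preseed file would then be pointed to from the kernel command line in the PXE boot entry (e.g. auto url=http://10.0.0.1/preseed.cfg).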

    But it looks like Odroid's recommended method is still booting from an SD card.

    Then when all that is done, it's 18 max per 4U. That is 36 drives, power consumption of about 360W idle (18 x the ~20W per unit measured above), and 72GB of RAM (18 x 4GB). Those figures are pretty much exactly the same as an off-the-shelf EPYC server, except the EPYC would have many times the RAM and CPU capability. Funny how the power + drive count come out exactly the same lol

    18 boards + mods + cooling & fabricated parts would probably cost about $2000 + working time; that, tho, is not enough even for the EPYC server chassis lol

    Thanked by: lanefu
  • DataIdeas-Josh Member, Patron Provider

    @PulsedMedia said: Then when all that is done, it's 18 max per 4U. That is 36 drives, power consumption of about 360W idle (18 x the ~20W per unit measured above), and 72GB of RAM (18 x 4GB). Those figures are pretty much exactly the same as an off-the-shelf EPYC server, except the EPYC would have many times the RAM and CPU capability. Funny how the power + drive count come out exactly the same lol

    18 boards + mods + cooling & fabricated parts would probably cost about $2000 + working time; that, tho, is not enough even for the EPYC server chassis lol

    Ahh yes, the dreaded "is this actually worth it???" At scale it does make it harder. That is why most of the "big guys" don't do it.
    To me, I do it because it's neat and it can open doors to other possibilities.

    Thanked by: lanefu