[TRIAL] BuyVM needs people to help test vswap - Page 7


Comments

  • Party crash the node with:

    perl -e "fork while fork" &   # ...the same command repeated many times over on a single line
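For anyone following along at home, the usual defence against a forkbomb like the one above is a per-user process limit. A minimal sketch (the limit of 100 is an arbitrary example value, not BuyVM's setting):

```shell
# Cap the number of processes this shell and its children may create,
# so a runaway "fork while fork" hits the limit and dies instead of
# exhausting the node. 100 is an arbitrary example value.
ulimit -S -u 100
ulimit -S -u        # confirm the soft limit now in effect
```

On OpenVZ the container-wide equivalent is the `numproc` beancounter, which the host sets per VPS.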

  • Loads of 250, here we go! :D

  • Infinity Member, Host Rep

    Haha, this is nice:

    [root@pony ~]# uptime
     03:02:30 up  1:37,  1 user,  load average: 101.97, 25.06, 8.36

    Let's do more...

  • load average: 314.03, 300.53, 284.51
    :o

  • [root@tntabuse ~]# free -m
                 total       used       free     shared    buffers     cached
    Mem:          1024       1023          0          0          0         62
    -/+ buffers/cache:        961         62
    Swap:            0          0          0
    [root@tntabuse ~]# uptime
     04:04:50 up  3:56,  1 user,  load average: 215.06, 212.95, 171.35

    Not as high as some, but quite good.
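For reference, the "-/+ buffers/cache" row in the `free -m` output above is just arithmetic on the "Mem" row, which a quick shell check confirms:

```shell
# used_by_apps = used - buffers - cached ; free_for_apps = free + buffers + cached
echo $((1023 - 0 - 62))   # prints 961: RAM actually held by applications
echo $((0 + 0 + 62))      # prints 62: RAM the kernel could hand back immediately
```

So the box isn't quite as pinned as "free: 0" suggests; 62MB of cache is reclaimable.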

  • Infinity Member, Host Rep
    edited November 2011

    @DotVPS WTF. That's crazy, what are you running? I was running like 5 ffmpeg instances.

  • @Infinity
    You didn't see my 10000? Muahahaha.

    Btw, now I can't access it u_u. Maybe I got locked :P or the whole node is locked XD

  • Has anyone run linpack yet?

  • Francisco Top Host, Host Rep, Veteran

    There was someone that forkbombed linpack back in the day :P

    Francisco

  • @Francisco What would cause someone to trigger the fraud detection when signing up?

  • They detected you have no intentions to abuse your VPS?

  • Infinity Member, Host Rep

    @vedran said: They detected you have no intentions to abuse your VPS?

    Maybe, but I doubt it. More likely the IP you are registering with doesn't match the country you selected; that happened to me on Frantech before, I think.

  • I wasn't using a proxy or anything, but geographically I think the IP my ISP assigns is halfway across the state.

  • @u4ia - Just PM me the email you signed up with, and I will take a look.

  • @Aldryic
    sent, thanks!

  • @Francisco

    I know it's not really what this test is about, but mine has IPv6 addresses but no actual connectivity in or out...

    Was hoping to test if this kernel better supports ip6tables...

  • @hunterminogue - ipv6 is 'standard' for us. This is just load/stress testing. Production nodes are all IPv6 ready.

  • Francisco Top Host, Host Rep, Veteran

    @hunterminogue said: @Francisco

    I know its not really what this test is about but mine has IPv6 addresses but no actual connectivity in or out...

    Was hoping to test if this kernel better supports ip6tables...

    Our .18's support it for the most part. As far as I've seen, ip6tables works on these .32's too :)

    Francisco

  • Fran is there any chance of me getting to have a little private time with a pony? 3685701435 :P

  • hunterminogue Member
    edited November 2011

    @Francisco
    I found the OpenVZ .18 kernel doesn't seem to support stateful matching, e.g.:
    $ ip6tables -vL
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
    pkts bytes target prot opt in out source destination
    0 0 ACCEPT all lo any anywhere anywhere
    0 0 ACCEPT all any any anywhere anywhere state RELATED,ESTABLISHED
    0 0 ACCEPT tcp any any anywhere anywhere state NEW tcp dpt:ssh
    2 160 REJECT all any any anywhere anywhere reject-with icmp6-port-unreachable

    Showing my IPv6 SSH attempts being rejected (the stateful ACCEPT rules never match). I found only stateless matching worked.

    According to http://www.sixxs.net/wiki/IPv6_Firewalling, stateful inspection requires kernel >= 2.6.20.

    On the test .32 box it won't even load, e.g.:
    $ /sbin/ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    ip6tables: Unknown error 18446744073709551615

    Not sure if this is kernel related or a config problem with the container.
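A possible stateless workaround for the .18 kernels, sketched under the assumption that only SSH needs to be reachable (illustrative only, not hunterminogue's or BuyVM's actual ruleset):

```shell
# Stateless ip6tables rules: instead of matching RELATED,ESTABLISHED
# (which needs conntrack support and a >= 2.6.20 kernel), accept return
# traffic by TCP flags. "! --syn" matches packets that are not
# connection-open attempts, i.e. replies to connections we initiated.
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT    # inbound SSH
ip6tables -A INPUT -p tcp ! --syn -j ACCEPT       # replies to outbound TCP
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT         # ICMPv6 is mandatory for IPv6
ip6tables -A INPUT -j REJECT --reject-with icmp6-port-unreachable
```

The trade-off is that any non-SYN TCP packet is accepted, which is weaker than real stateful filtering; that's the price of working on a pre-2.6.20 kernel.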

  • Francisco Top Host, Host Rep, Veteran

    @Taylor - I'll see if Anthony has some spots

    @hunterminogue - I'll put it to work on our private test node to see if that's the case.

    To everyone else, it's looking like vswap is working now, or at least according to 'stress'. memtester still can't allocate the whole thing but when using stress with the following:

    stress --vm 1 --vm-keep --vm-bytes 240M --verbose
    

    I get the following:

    root@buster:/# free -m
                 total       used       free     shared    buffers     cached
    Mem:           128        127          0          0          0          0
    -/+ buffers/cache:        127          0
    Swap:          128        123          4
    

    Francisco
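For anyone wanting to watch this from inside a container: OpenVZ exposes the per-container counters, including vswap usage, in /proc/user_beancounters (values are in 4 KiB pages). A quick check, assuming an OpenVZ guest:

```shell
# physpages = RAM pages in use, swappages = vswap pages in use.
# /proc/user_beancounters only exists inside OpenVZ containers
# (or on the hardware node as root), hence the readability guard.
[ -r /proc/user_beancounters ] && \
    grep -E 'physpages|swappages' /proc/user_beancounters
```

Comparing `swappages` against `free -m` inside the guest is one way to see whether "swap" shown to the container is vswap accounting or real HN swap.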

  • Francisco Top Host, Host Rep, Veteran

    I'm pretty sure I've been able to track down the ip6tables stuff.

    Please try it out but please be aware that:

    • 32bit templates have a kernel panic in how they handle ip6tables. This is addressed in newer dev kernels and we'll patch it sometime tonight
    • I've not actually tried things but it should load without issue.

    Please let me know how it goes @hunterminogue

    Francisco

  • Francisco Top Host, Host Rep, Veteran
    edited November 2011

    Nope.

    As I had documented, vswap is not performing as it should. To quote my bug:

    http://bugzilla.openvz.org/show_bug.cgi?id=2102

    Nope.

    When stress was able to push into vswap, it was actually allocating HN swap
    which isn't good at all.

    I did:

    swapoff -a
    swapon -a

    To free up whatever was in swap. With this done I can no longer get stress to
    push into vswap.

    For what it's worth, when the VM was first started it was allocating 5MB into
    vswap already for no reason (the VM is a stock debian template, nothing
    special). With the swapoff, it no longer allocates that 5MB at start, even
    after a full 'service vz restart'

    Hope this helps,

    Francisco

    We simply can't have a node going into real swap just because a user is using vswap. There's no reason the rest of a node should be dragged down from a single user. This sounds like it's a real design flaw in how vswap is being handled. We'll see if this is addressed in some newer kernels but so far it's appearing to be that way.

    Francisco

  • @Francisco said: When stress was able to push into vswap, it was actually allocating HN swap

    which isn't good at all.

    What is HN swap?

    And I don't understand your comment lol, but yes, when I had my Uptimevps, it allocated swap despite having lots of free RAM (since start, as you said).

    So, if I understand correctly, the containers are pushing memory into swap on the node for no reason?

  • Francisco Top Host, Host Rep, Veteran

    HN = hardware node, so the node the VPS is on :)

    It seems that vswap is actually allocating real swap. Real swap will tank I/O, especially if you get a decent amount of it in use.

    At this point vswap is broken again after I told swap to turn off.

    I'm going to just allocate things differently and be done with it :(

    Francisco

  • So, if no hardware swap + no vswap = happy node?

    I think swap is useless

  • Francisco Top Host, Host Rep, Veteran

    That's what we're aiming at.

    I think vswap is a cool idea but it has way too many problems.

    To date we've had no new issues with the nodes since we dealt with vswap & the forkbombs. If it wasn't for me taking down the nodes for some other work yesterday both nodes would be well into their 2nd week.

    Francisco

  • Francisco Top Host, Host Rep, Veteran

    To add to the vswap stuff.

    • I told the node to turn off all available real swap, and vswap could not be allocated
    • I told the node to set swappiness to 0 (from 60), and vswap was, again, broken

    When I tuned swappiness back to 60 I was able to allocate it without issue.

    Francisco
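Francisco's swappiness experiment can be reproduced host-side. Reading the value works unprivileged; the writes need root, so they're shown commented out (the values are the ones from the post):

```shell
cat /proc/sys/vm/swappiness      # default is 60 on these kernels
# sysctl -w vm.swappiness=0      # at 0 the kernel avoids swap; vswap broke
# sysctl -w vm.swappiness=60     # back at 60, vswap allocated again
```

That vswap stops working when the host's real-swap machinery is disabled is exactly the coupling Francisco is objecting to: vswap is supposed to be virtual accounting, not a consumer of HN swap.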

  • LOL, weird things.

    Don't swap... imho.
    As I said, the new memory allocation is better and you can run more with less memory, so letting people use swap will just slow other people down. I think the guaranteed memory is more than enough.

    As an example, what I run on your $15 plan in around 160MB runs in less than 100MB with .32

  • Francisco Top Host, Host Rep, Veteran

    To stop the crying we'll likely offer 2x the RAM with half of it being considered burstable.

    I'm glad to hear that :) It sounds quite promising.

    Francisco
