Debian Unstable (sid) on OpenVZ

Comments

  • @efball
    None of those hosts is using .32
    (probably the BuyVM one is, but I'm not sure)

    Btw Prometeus and vps6 run .32. So... they hate me? XD

  • @yomero said: Btw Prometeus and vps6 run .32

    As does Misterhost.de...

  • NickM Member

    FYI, just because uname -a says it's .32 doesn't mean it actually is .32. Some OVZ templates spoof the kernel version so that certain things work. For example, I believe Debian 6 templates do this because of an issue with the version of libc that they're using.

    Thanked by 1: TheHackBox
  • Francisco Top Host, Host Rep, Veteran

    Yer.

    If you want to know if it's truly .32, just paste what uname -a reports and it's easy to tell :) (a quick check is sketched below)

    We just had our 'most stable' node running .32 panic last night after 85 days of uptime.

    Francisco
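
    As a rough illustration of the point above: a genuine OpenVZ 2.6.32 kernel reports an 042stab build tag in its version string, while a template that spoofs the version usually shows a plain distro string such as 2.6.32-5-amd64. A minimal sketch of that heuristic (the stab-suffix check is an assumption based on OpenVZ's stable-branch naming, not something spelled out in this thread):

        # What the container reports
        uname -a

        # Heuristic: real OpenVZ stable kernels carry an 042stab build tag,
        # e.g. 2.6.32-042stab053.5; a bare distro string like 2.6.32-5-amd64
        # is more likely the template faking the version.
        if uname -r | grep -q 'stab'; then
            echo "Reported kernel $(uname -r) looks like a real OpenVZ build"
        else
            echo "No stab suffix in $(uname -r); the version may be spoofed by the template"
        fi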

  • @NickM said: FYI, just because uname -a says it's .32 doesn't mean it actually is .32. Some OVZ templates spoof the kernel version so that certain things work.

    @Francisco said: If you want to know if it's truly .32 just paste what uname -a reports and it's easy to tell :)

    So? :-?

  • @NanoG6 said: So? :-?

    Wait, @Francisco just posted that message and a minute later you wrote that ^^.

  • @NanoG6 said: So? :-?

    He is talking about the el5/el6 suffixes and so on.
    On Proxmox... I don't know :S

  • @OneTwo said: That's true. Like IPXcore.
    @yomero said: Ipxcore doesn't use .32

    @Damian you do?

    Ehh, only one of our nodes runs a .32 kernel at this time. There's a DL360 at Adam's house running container start/stop loops on 2.6.32-042stab049.6 (like the loop sketched below) to exercise some potential issues I've fixed/changed/hammered on; so far it hasn't died yet, after 29 days.
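
    For context, the start/stop loop mentioned above is just a stress test that keeps cycling a container to try to trip kernel bugs. A minimal sketch using vzctl; the container ID and sleep intervals below are placeholders, not details from this thread:

        #!/bin/sh
        # Repeatedly start and stop one throwaway container to shake out kernel issues.
        # CTID and the sleep timings are assumptions; point this at a test container only.
        CTID=101
        while true; do
            vzctl start "$CTID" && sleep 30
            vzctl stop "$CTID" && sleep 5
        done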

  • KuJoe Member, Host Rep

    The only server running a .32 kernel for us is our Proxmox server. :)

  • @Francisco said: If you want to know if it's truly .32

    My end-user test has been more like "if it runs Ubuntu 12.04... it's .32" (because the glibc it uses requires a 2.6.24 or later kernel; a way to read that requirement off libc is sketched below).

    I don't doubt @Francisco's experience; the providers using .32 probably have fewer "heavy" users, which explains its relative stability.

    Misterhost: 2.6.32-042stab044.17
    Prometeus: 2.6.32-042stab053.5
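
    The Ubuntu 12.04 test works because its glibc (eglibc 2.15, if I recall correctly) is built with a minimum supported kernel of 2.6.24, so on an OpenVZ .18 host it refuses to start with "FATAL: kernel too old". A quick way to read that minimum straight off the libc binary; the path assumes a 64-bit 12.04 install and may differ on other templates:

        # Show the minimum kernel version this glibc was built for.
        # Path assumes 64-bit Ubuntu 12.04; adjust for 32-bit or other layouts.
        readelf -n /lib/x86_64-linux-gnu/libc.so.6 | grep 'OS: Linux'
        # Expected output resembles:   OS: Linux, ABI: 2.6.24

        # 'file' reports the same ABI note in its output:
        file /lib/x86_64-linux-gnu/libc.so.6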

  • Francisco Top Host, Host Rep, Veteran

    44.17 has major crash bugs, so I'm surprised they're still even bothering with it.

    The 5x family has been better but still problematic. Hell, we've had the box our shared SQL is on panic a few times now, and it's running the latest OVZ .32 since it's Proxmox based. Proxmox's kernels are literally the OVZ ones on Debian. And no, it isn't a Debian issue, since we tested on CentOS as well ;)

    .32 can run 'stable', meaning 60 - 70 days without crashes. Then there are other times when you'll be lucky to see past 2 weeks; we've had both. In fact, one of our .32s crashed just the other night after being up for 80 days.

    Sure, if all of them gave 80 days we could 'try' to justify it, but with the power stability of CoreSite, people are used to 100+ day uptimes, and anything less (like we had on .32) leads to tickets of 'PLEASE MIGRATE ME'.

    http://bugzilla.openvz.org/buglist.cgi?product=OpenVZ&component=kernel&resolution=---&list_id=1493

    Still many reports of random softlocks/deadlocks.

    We actually had a node we were trying to keep on .32 just for our Java users, but it locked up 3 times over the weekend, forcing Anthony's hand, and he simply moved it to a .18.

    It isn't a workload thing, since we've had it on nodes that normally sit at ~1 load and they still deadlocked.

    At some point .32 will be nice, but the biggest issue is that, unless it's an obvious coding error, we'll likely get different results on different nodes, even though all of the equipment is identical and fully burned in.

    Francisco
