
frantec luxembourg shared hosting and storage slab - any chance?

Comments

  • Francisco Top Host, Host Rep, Veteran

    @Arkas said: How about new customers?

    Thanks.

    Yes, but I don't have an ETA for a product launch. Let's get through this mess before I progress onto another one :)

    Francisco

  • @Francisco said: Let's get through this mess before I progress onto another one.

    Sounds nearly like a plan. :p

  • @Francisco said:

    @Arkas said: I'm a little confused. Is NameCrane only for domains, or will it also be like buyshared which I'd love to get?

    Namecrane will take over Buyshared customers.

    Will NameCrane also accept new users and offer the same as buyshared? @Francisco

  • @snevsky said:
    @Francisco It's not quite normal to promise 15-45 minutes of downtime when in fact it has been down for almost 5 hours.

    let’s call it “Team Luxembourg”
    © LuxConnect

  • Francisco Top Host, Host Rep, Veteran

    @hyperblast said: Will NameCrane also accept new users and offer the same as buyshared? @Francisco

    It won't offer the same, it'll be more expensive.

    Francisco

  • @Francisco said:
    It won't offer the same, it'll be more expensive.
    Francisco

    But it's still under Frantech, right?
    Just under a different division.

  • postcd Member
    edited January 2022

    7.5 hours of downtime, BuyShared DirectAdmin. Seems like a fail on https://www.luxconnect.lu's part?

  • donko Member
    edited January 2022

    45 minutes eta + LUXCONNECT™ x 6 hours down = franned

    edit: DA resellers are back YAY!

  • Saahib Host Rep, Veteran

    @donko said:
    45 minutes eta + LUXCONNECT™ x 6 hours down = franned

    You mean you have had franned.it registered for 7 years, waiting for this moment?

  • @donko said:
    45 minutes eta + LUXCONNECT™ x 6 hours down = franned

    edit: DA resellers are back YAY!

    Why not add a visit counter?

    It could be very interesting.

  • Must say it's relatively quiet here for

    No rage quits? No losing millions? No "sued" or "chargeback!111". What happen LET?!

  • @JabJab said:
    No rage quits? No losing millions? No "sued" or "chargeback!111". What happen LET?!

    In Luxembourg, the servers are for those who do not need extra hype.

  • @Francisco what are the chances node KVM-12.LU will be up today?

  • Francisco Top Host, Host Rep, Veteran

    @snevsky said: @Francisco what are the chances node KVM-12.LU will be up today?

    The node's up, slabs aren't. I'm just working on the last cluster member before I can spin things up.

    If you don't have a slab, then go for it :)

    Francisco

  • @JabJab said:
    No rage quits? No losing millions? No "sued" or "chargeback!111". What happen LET?!

    Already switched my Lux Proxy to LV before the downtime, so no need to rush.
    Just hope no DMCA complaint arrives for my LV Slice.

  • @Francisco said: The node's up, slabs aren't. I'm just working on the last cluster member before I can spin things up.

    I have a slab, but I prudently detached it before the move. The VPS is more important to me now (even without the slab), but it's still down.

  • @snevsky said: I have a slab, but I prudently detached it before the move. The VPS is more important to me now (even without the slab), but it's still down.

    Did you click power up in the panel? I guess Fran doesn't want to boot those up (because of the SLAB), but a manual boot should work if you don't need the SLAB?

  • @JabJab it works. Thank you

  • Back working. 11hrs 13mins, oh well, at least it wasn't my billing time. ;)

  • @AlwaysSkint said:
    Back working. 11hrs 13mins, oh well, at least it wasn't my billing time. ;)

    Man, I hope my server will be usable really soon. It's almost 24 hours now.

  • Francisco Top Host, Host Rep, Veteran

    @tommmy said:

    @AlwaysSkint said:
    Back working. 11hrs 13mins, oh well, at least it wasn't my billing time. ;)

    Man, I hope my server will be usable really soon. It's almost 24 hours now.

    You need to be ticketing then :) That or PM me your IP.

    Everything is fine at this point. Everything except the router, one of the Aristas we use for our 40gig cross connects, and one of our internal nodes has been moved over, and it's all in good working order.

    • Slabs are good, just took a bit to address one of the arrays.
    • Slices are also in good shape, just took a few reboots to get things squared.
    • Shared is all good to go, but one node had a bad stick of RAM that was kicking off "BA" error codes while trying to POST. The DC swapped the DIMMs and all's well.

    Francisco

  • Did you click start? If all those SLABs were done [my server booted], then everything else should have been up for like 6 hours already?

  • Francisco Top Host, Host Rep, Veteran

    @JabJab said:
    Did you click start? If all those SLABs were done [my server booted], then everything else should have been up for like 6 hours already?

    Yep. He mentioned 24 hours, which means he was in yesterday's batch, not today's. I'm thinking maybe he had a slab and it wouldn't boot since it couldn't connect to the cluster.

    I fired off a mass boot on all nodes once slabs were addressed.

    Francisco

  • tommmy Member
    edited January 2022

    @Francisco said:

    @tommmy said:

    @AlwaysSkint said:
    Back working. 11hrs 13mins, oh well, at least it wasn't my billing time. ;)

    Man, I hope my server will be usable really soon. It's almost 24 hours now.

    You need to be ticketing then :) That or PM me your IP.

    Everything is fine at this point. Everything except the router, one of the Aristas we use for our 40gig cross connects, and one of our internal nodes has been moved over, and it's all in good working order.

    • Slabs are good, just took a bit to address one of the arrays.
    • Slices are also in good shape, just took a few reboots to get things squared.
    • Shared is all good to go, but one node had a bad stick of RAM that was kicking off "BA" error codes while trying to POST. The DC swapped the DIMMs and all's well.

    Francisco

    My bad, it's actually around 18 hours since I noticed my server was down. I lost count at some point.

    I PM'ed you the ticket ID and server IP. Hopefully I can get some insight.

    Edit: Thank you. Issue solved.

  • Saahib Host Rep, Veteran

    Maybe it's now time for @Francisco to offer some kind of floating IP thing to switch servers and for load balancing.

  • And you say LowEndTalk providers do not innovate, just look at Francisco!

    *** THIS IS A NETWORK ONLY OUTAGE, YOUR SERVICES WILL REMAIN PHYSICALLY ONLINE FOR THE ENTIRE MOVE ***

    ;')
    Online while network outage, the future is now!

    Rest of message:

    We apologize for the short notice (again).

    In the next 12 to 24 hours we will be migrating our router and cross connects in Luxembourg. Due to LuxConnect using a 3rd party to complete the fiber splicing, we can't set an exact start time.

    To complete this work a full network outage must occur. The actual moving of the router upstairs should go quickly since it's a single unit, but given the delays we've had already I won't promise any 'completion' time.

    What we can say is this:

    • Luxconnect claims each XC will take around 10 minutes to splice.

    • We have 4 cross connects in this location so their side will take about an hour to complete.

    • We don't require all 4 cross connects to be completed for service to be restored, although there will be high latency and poor speeds until the work is fully completed.

    • Root really shouldn't take more than 15-20 minutes at the most to move that single unit, as it's quite light and doesn't require 2 people. Famous last words.

    • Once we see one of our links go down we'll press Root to start the router move ASAP.

    Since we're not on site to complete this work ourselves we're at the mercy of our vendors. We're sorry we can't give concrete completion times.

    We thank you for your patience with us as we work through this.

    Thank you for your patronage,

    TL;DR: Router moving to different floor in next 12-24h, short (~20 min to infinity) network outage expected. Servers still powered on.

  • Francisco Top Host, Host Rep, Veteran

    @JabJab said: Online while network outage, the future is now!

    physically ;p

    It means we aren't powering down services and such. We have more than a few people that do FDE so they have to be around to unlock their volumes and boot up.

    Francisco

  • Francisco Top Host, Host Rep, Veteran

    @Saahib said:
    Maybe it's now time for @Francisco to offer some kind of floating IP thing to switch servers and for load balancing.

    We already do, in the form of anycast globally and 'floating IPs' per location.

    If you have multiple services in a single location you can ticket and I'll give you a free floating IP; you can then talk BGP to our route-server to 'float' it between instances. You can also just use keepalived/etc, but that could end up with ARP hold times getting in the way.

    Francisco
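
    For illustration only, here's a minimal sketch of how the BGP side of such a floating IP could be driven from an instance, assuming an ExaBGP "process" section feeds these lines to the session with the route-server. The floating IP, the health-check port, and ExaBGP itself are assumptions for the example, not necessarily BuyVM's actual setup:

```python
#!/usr/bin/env python3
"""Illustrative health-check helper for an ExaBGP 'process' section.

Announces the floating IP while the local service answers and withdraws it
when the service stops, so a standby instance announcing the same /32
attracts the traffic instead.
"""
import socket
import time

FLOATING_IP = "203.0.113.10"  # hypothetical floating IP assigned by the host
CHECK_PORT = 80               # hypothetical local service to watch


def service_up() -> bool:
    """Return True if the local service accepts TCP connections."""
    try:
        with socket.create_connection(("127.0.0.1", CHECK_PORT), timeout=2):
            return True
    except OSError:
        return False


announced = False
while True:
    up = service_up()
    if up and not announced:
        # ExaBGP reads this line from stdout and sends the BGP UPDATE.
        print(f"announce route {FLOATING_IP}/32 next-hop self", flush=True)
        announced = True
    elif not up and announced:
        print(f"withdraw route {FLOATING_IP}/32 next-hop self", flush=True)
        announced = False
    time.sleep(5)
```

    The keepalived route does the same failover with VRRP instead of BGP, but as Francisco notes above, ARP hold times on the upstream gear can delay the switchover.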

  • @Francisco said:
    If you have multiple services in a single location you can ticket and I'll give you a free floating IP; you can then talk BGP to our route-server to 'float' it between instances. You can also just use keepalived/etc, but that could end up with ARP hold times getting in the way.

    Sounds complicated, we need a one-click button in Stallion.

  • Francisco Top Host, Host Rep, Veteran

    @donko said: Sounds complicated, we need a one-click button in Stallion.


    Sounds like you mean some sort of hosted load balancer. Could be interesting but we'd have to charge for it. I don't think there's enough demand for such a feature to justify the development time.

    Francisco
