
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • @imthatguyhere said:

    @Axeler4t0r said:
    My monthly-paid storage VPS has been showing "under migration" since 31st July, so I've already paid for a month without knowing whether it's going to boot up again. @VirMach, if you cannot fix it, please at least let us know.

    What node was that on?

    LA I guess?

    It only shows "select location" which is "Los Angeles, CA - New Location - Preorder".

  • @add_iT said:
    My TYOC040 has been down for almost 24 hours.

    I don't think I'll renew this machine next year, as this VM isn't worth keeping: there has been a lot of downtime and I've had almost no use of it.

    And yet many people are still defending VirMach after so many collapses :)

    I think they have already failed at providing service to customers; it's not about the money spent (it's cheap, yeah).

    But that is the fact and the truth.

    While I still hope VirMach overcomes this eventually, it's not where I'll put my money. Renewal will depend entirely on whether they compensate downtime proportionately (one week, as per the SLA claim, isn't going to cut it when the servers aren't actually stable in production).

  • windytime90 Member
    edited August 2022

    @jperkins said: and the linkedin profile of the CEO of VIrMach
    https://www.linkedin.com/in/soheil-golestanipanah-87871190/
    Well, technically he spelled the name of his company wrong in the CEO title, but he did get it right in the Senior System Engineer one.


    Had to read it several times to find the typo. LoL.

  • tenpera Member
    edited August 2022

    Dallas is suddenly off now.

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    Thanked by: Void
  • jbiloh Administrator, Veteran

    @tenpera said:
    Dallas is suddenly off now.

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    One server or many servers?

  • cold Member

    @jbiloh said:

    @tenpera said:
    Dallas is suddenly off now.

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    One server or many servers?

    Mine is also dead... I guess it's time to say goodbye to VirMach and move to purple daddy; the service he offers is more stable...

  • DanSummer Member
    edited August 2022

    Quoting an earlier comment:

    I think you left out the parts "Didn't pay CC" and "Lost his help by not paying / under-paying." If it's true, it seems CC didn't cut him off too soon; it seems VirMach had unpaid overdue invoices and was in debt to CC.

    For me, it sounds like a scam. Maybe @jbiloh should set a time limit for him to come out and explain everything? We cannot wait forever for the "migration process" without any update, and he still has the "Top Host" tag, which is really a shame.

    You literally just made some sh** up and added "if that's true".

  • Void Member

    @tenpera said:
    Dallas is suddenly off now.

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    Same for Tampa. It's been like that for a while.

  • Sunkins Member
    edited August 2022

    @jbiloh said:

    @tenpera said:
    Dallas is suddenly off now.

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    One server or many servers?

    @VirMach

    LAX 031 also has the same problem, "Timed Out After ..."

  • Published elsewhere yesterday:

    @VirMach said:
    Sorry guys, I didn't really provide an update for a while and I'm sure it doesn't look good while there's also several outages. We're aware of all of them and working on permanent solutions while balancing everything else. Here's some information on what we have achieved so far since my last message, it's possible I may repeat a few things as I haven't looked.

    • Stuck VMs: These were part of the 0.6% I mentioned a few times. We've made great progress on these and also located some others facing different strange issues and began fixing those as well. For example there were some VMs that migrated properly but the database wasn't properly updated to reflect the new server they were on (a few dozen of these cases slipped through.)
    • Templates: All template options should have been corrected for ALL services, meaning at least on SolusVM you should now have access to all the proper templates (now whether they have other issues is another matter.) We're also going to normalize all services and template options finally on WHMCS (it's on the list.)
    • Dedicated servers: We've continued adding these and delivering them where possible. I know there's still people waiting if they had more specific requests and we're currently still working on getting more 32GB options in certain locations requested. I'd say we're somewhere around 60% right now if not higher in terms of what we've sent out and close to done when it comes to getting all the servers we require to finish them out.
    • More Ryzen: Quietly in the background we've been getting the last dozen or two servers completed and these will be going out to locations that need them (re-occurring issues) as well as locations low in stock which should potentially give people a chance to roll their service in a new highly desired location in the upcoming weeks.
    • Storage: We fell behind again on these but they're pretty much ready. Once we close out some of the issues that took longer than expected, these are essentially ready to send out. We're still working on figuring out the problem with the big storage node in NYC so if that situation regresses any further we may have to send Amsterdam's server to NYC.
    • Tickets/Organization: A lot of work has gone in to organize and segment the tickets where we resolve a problem and then begin mass replying for those issues so we can get back to a normal ticket workflow. A lot of work has gone in with organizing everything else as we prepare to return to normalcy. This also includes cleaning up the office from server parts scattered everywhere so we can try getting in serious new hires to the office.

    • Other/Attention to detail: I guess this should have its own bullet point. I've personally done a lot of work on the things we've put off for some time because we were focusing on the bigger issues. This includes a lot of things on the frontend and backend (things you don't normally hear about like the non-tech aspects of running a business.) We had to focus on cleaning up and organizing some stuff like anti-fraud, chargebacks, abuse reports, balancing expenses, etc. I also had to go back and pick out some smaller but important tasks from the list and had to sink in a good amount of time in organizing our websites, and gearing up to go back to being a functional company that also has to market and sell their products. Some of the stuff is already discussed above such as making sure template options work. I've also personally started going through every single VM again that is in an offline state and began re-organizing a new list to make sure everyone's having a good time (being able to use their service) and digging into logs and everything at a higher level than just deferring it due to it being time consuming. This also means looking into things like IPv6, rDNS, and trying to have a more solid plan for them instead of just saying "coming soon." Something else worth mentioning is that this also... (how many times have I said also?) includes things like looking at template issues like Windows and trying to fix them. (Edit: Also,) things like looking at nodes more specifically and fixing scripts, handling more abuse affecting certain nodes, and so on. Basically, we've been moving back to micro instead of just macro in terms of what we handle to some degree as it was beginning to collectively become a big issue.

    All the maintenance is going very poorly but I hope we've done a semi-decent job of trying to communicate it. I haven't updated the pages personally but relayed information. We got ATLZ007 to have a heartbeat yesterday but it looks like it's gone right back down. Atlanta is not in a good state. It's difficult to get the techs to do anything and they keep misplacing our stuff. The plan was to move everyone to Tampa but that's having its own issues.

    Tampa and Chicago issues have mostly been delayed on our end and I keep trying to get to them, hopefully I'll have some meaningful plan set in place by today but I keep saying that to myself every day. We're still getting acclimated to the unexpected additional work as a result of what happened near the beginning of the month and I have a lot more meetings and other work/paperwork as a result. I've gone back to working all day again so just playing catch up now to the week or so that I cut back hours for health reasons.

    San Jose, we're trying to get button presses there again but obviously this is a terrible situation to constantly be in so we're trying to move those to Los Angeles once we get them back online until we can ship them out and fix the issues causing button presses to be required.

    We still have around 9 servers we didn't expect to go down that have been down to some degree or at the very least SolusVM issues for quite some time now and I apologize.

    Thanked by: titus, TimboJones, Esec
  • Nobody has replied to my ticket for a month. Is that normal? My VPS is still down. #858773

    Thanked by: clan
  • @ben47955 said:
    Published elsewhere yesterday :

    @VirMach said:
    Sorry guys,

    ... snipped for brevity.

    Where was this posted? Thanks.

    Tried to go to virmach.com and it has been changed again. Some parts of it direct to a company called Hostinza. You have to manually type in https://billing.virmach.com/ to get to the old client area. The terms now list Hostinza as the entity you are doing business with.

    If VirMach posted the above referenced message, why wasn't it put on the VirMach website? And why is there no mention of the fact that, by his own admission at https://billing.virmach.com/serverstatus.php, a sizeable portion of his network is down, had been down for days before he posted, and in most cases there is no attributable reason for the downtime besides "Investigating"?

    Thanked by: titus
  • @jperkins said:
    Where was this posted? Thanks.

    LES

    @jperkins said: Tried to go to virmach.com and it has been changed again. Some parts of it direct to a company called Hostinza. You have to manually type in https://billing.virmach.com/ to get to the old client area. The terms now list Hostinza as the entity you are doing business with.

    Hostinza isn't a company, lol; it's literally the WHMCS theme he's using. It's mentioned in the page footer: https://themeforest.net/item/hostinza-isometric-domain-web-hosting-wordpress-theme/22404212

    I'd assume any references to Hostinza are left over from the theme in the rush to get the site up, including the logo images. The email address at the top is literally [email protected]; it's clearly been thrown up in a rush. Fantastic detective work there.

    Thanked by: jperkins, mrTom
  • @jperkins said:

    @ben47955 said:
    Published elsewhere yesterday :

    @VirMach said:
    Sorry guys,

    ... snipped for brevity.

    Where was this posted ?
    thanks

    OGF

  • titus Member
    edited August 2022

    @jperkins said: Tried to go to virmach.com and it has been changed again. Some parts of it direct to a company called Hostinza. You have to manually type in https://billing.virmach.com/ to get to the old client area. The terms now list Hostinza as the entity you are doing business with.

    Interesting. Same situation: when I click 'login' on the main website it redirects to a 'Hostinza' page. But I think it's only an incomplete website template. Source: https://www.wpsolver.com/hostinza/

    If you check the IP address directly, it redirects to virmach.com, so it's probably just an incomplete website template (a bug).
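
    For anyone who wants to verify that kind of redirect themselves, a quick check with curl works, assuming it's installed (the address below is a documentation placeholder, not VirMach's actual IP):

    # show the redirect chain for a bare IP
    curl -sIL http://203.0.113.10/ | grep -iE '^(HTTP|location)'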

  • hfch88 Member
    edited August 2022

    Ryze.lax-a031.vms server is not online, please check the server!

    @VirMach said:
    Sorry guys, I didn't really provide an update for a while and I'm sure it doesn't look good while there's also several outages.

    ... snipped for brevity (the full update is quoted earlier in the thread).

    Thanked by: Sunkins
  • NYCB033X is not connected to the Solus panel, and the in-house "buttons" don't work either; nothing can be done with this crap.

  • netomx Moderator, Veteran

    My server is now up; it seems it was a network issue. The VPS has good uptime and my processes are still running.

    Thanked by: dedicados
  • tridinebandim Member
    edited August 2022

    The Windows Server 2022 template is working now. Don't bother searching; here is the fix for the shrunken disk:

    C:\> diskpart
    DISKPART> select disk 0
    DISKPART> select volume 2
    DISKPART> extend filesystem
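
    The disk and volume numbers above are just what worked for the original poster; it's worth confirming yours first, and if the partition itself hasn't grown into the new free space yet, a plain extend before extend filesystem handles that. A sketch, assuming the usual single-disk Windows layout:

    C:\> diskpart
    DISKPART> list disk
    DISKPART> list volume
    DISKPART> select disk 0
    DISKPART> select volume 2
    DISKPART> extend
    DISKPART> extend filesystem
    DISKPART> exit
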
  • clan Member

    @mobin said:
    Nobody has replied to my ticket for a month. Is that normal? My VPS is still down. #858773

    My VPS has been down for 3 months now :D

  • @tridinebandim said:
    The Windows Server 2022 template is working now. Don't bother searching; here is the fix for the shrunken disk

    Still not working for me. Stuck on Windows Boot Manager with error 0xc0000225.
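
    For what it's worth, 0xc0000225 usually means Windows can't find valid Boot Configuration Data. A common repair attempt, assuming you can attach a Windows install ISO from the panel and open the recovery command prompt (drive letters are illustrative; the recovery environment may map the Windows volume to a letter other than C:), looks roughly like this:

    X:\> bootrec /fixmbr
    X:\> bootrec /fixboot
    X:\> bootrec /rebuildbcd
    X:\> bcdboot C:\Windows

    No guarantee this covers whatever is wrong with the template itself, but it's cheap to try before reinstalling.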

  • tridinebandim Member
    edited August 2022

    @Gabri_91 said: Still not working for me

    OK, my mistake.

    The AMSD028 Windows Server 2022 template works :D

    BTW, are there any tutorials around on backing up a VPS from rescue mode in a way that's easily restorable?
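
    I'm not aware of a canonical tutorial, but here is a minimal sketch of one common approach, assuming a Linux rescue environment with network access and a remote machine you control (the hostname, user, and device name below are placeholders; /dev/vda is only typical for KVM, so check lsblk first):

    # inside rescue mode: identify the VPS disk
    lsblk

    # stream a compressed raw image of the whole disk to a remote host over SSH
    dd if=/dev/vda bs=4M status=progress | gzip -c | ssh user@backup.example.com 'cat > vps-backup.img.gz'

    # restore later, again from rescue mode, by reversing the pipe
    ssh user@backup.example.com 'cat vps-backup.img.gz' | gunzip -c | dd of=/dev/vda bs=4M status=progress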

  • Thundas Member
    edited August 2022

    Another update

    Pretty successful day.

    Everything except SJCZ005 and ATLZ007 are online now, at least for the time being. I'm testing stability and will have a plan for all of them in the afternoon (Sunday) and try to send out emails for at least the first few if we decide to do migrations or scheduled hardware replacements. ATLZ007 has regressed, I can't get it back up. The OS drive is missing now, next step is to either recover that onto another disk or go into rescue and copy everyone off. SJCZ005 is not responding.

    But here's a general run-down of the issues faced:

    Tokyo had disks disappear and finally re-appear after several reboots and BIOS configuration changes. The disks are healthy and we still have to investigate what happened but it's possible we'll do migrations for those on the disks and then replace them.
    Chicago Hivelocity initially had issues over several days, and one of them was related to the switch. The switch became unstable, and we were able to get it back up for the time being. The switch is being replaced sometime mid-week.
    Tampa is unknown but they're back for now after several reboots and BIOS configuration changes.
    San Jose was discussed above. I'm going to attempt to fly in on Monday (changed from Sunday as I didn't hear back from the facility and my flight would leave in a couple hours) and resolve what I can. Once I get confirmation, emails go out.
    NYC storage had CacheVault failure as mentioned above. After finally figuring this out after several kernel panics, etc, we overnighted it and it's stable for now. We still need to look into it and see if there's other controller issues.
    Los Angeles is a combination of potential power issue and overloading due to I/O errors. I'm going to try to look at the I/O error one with priority to ensure we minimize risk of data loss and for the other schedule a PSU swap if it remains semi-stable.
    Any others I missed, feel free to ask about it.

    Thanked by: jperkins
  • jperkins Member
    edited August 2022

    @Ahfaiahkid said:
    Another update

    Pretty successful day.

    Tokyo had disks disappear and finally re-appear after several reboots and BIOS configuration changes. The disks are healthy and we still have to investigate what happened but it's possible we'll do migrations for those on the disks and then replace them.

    thanks for the info:
    TYOC035 came back up about 12 hours ago for me after a week of downtime.

  • [Maintenance] San Jose DC - Monday, 8AM-12PM

    Dear VirMach Customers,

    We will be performing general maintenance for all servers in San Jose as well as
    emergency maintenance on SJCZ004, SJCZ005, and SJCZ008 tomorrow, August 29th @ 8AM to 12PM PST. Most other servers may not be affected or only minorly affected (such
    as temporary networking disruption) as we evaluate stability and decide if any
    maintenance would be beneficial to prevent future outages. More information may be
    posted on the network status page.

    Thank you.

    Received by email.
    Source: https://i.ibb.co/vBJ5smq/2022-08-29-055610.png
    (I have a VPS on SJCZ006 node)

  • @jperkins said:

    @Ahfaiahkid said:
    Another update

    Pretty successful day.

    Tokyo had disks disappear and finally re-appear after several reboots and BIOS configuration changes. The disks are healthy and we still have to investigate what happened but it's possible we'll do migrations for those on the disks and then replace them.

    thanks for the info:
    TYOC035 came back up about 12 hours ago for me after a week of downtime.

    Unfortunately, my VPS on TYOC035 is still offline; however, I can ping hosts in the same IP range. It seems my neighbors came back up but I didn't XD

  • monk29 Member
    edited August 2022

    @titus said:
    [Maintenance] San Jose DC - Monday, 8AM-12PM

    Dear VirMach Customers,

    We will be performing general maintenance for all servers in San Jose as well as
    emergency maintenance on SJCZ004, SJCZ005, and SJCZ008 tomorrow, August 29th @ 8AM to 12PM PST. Most other servers may not be affected or only minorly affected (such
    as temporary networking disruption) as we evaluate stability and decide if any
    maintenance would be beneficial to prevent future outages. More information may be
    posted on the network status page.

    Thank you.

    Received by email.
    Source: https://i.ibb.co/vBJ5smq/2022-08-29-055610.png
    (I have a VPS on SJCZ006 node)

    SJCZ005: after the maintenance it was online for a few hours, then went down again.

  • KuYeHQ Member
    edited August 2022

    https://billing.vpshared.com/index.php
    Hi @VirMach, I have a VPS that I paid to migrate to Japan. It has been a month and it still can't be used; can you help me?

  • tuc Member

    @VirMach where have you gone :D

  • yall got no love for him, he aint got no love for ya.

This discussion has been closed.