Nexus Manual Backup

Oh no! You dropped your phone in the street. Then it was run over. And then you were almost hit by a Subaru trying to rescue it from the middle of the crosswalk.


If this sad story sounds familiar — or at least plausible — chances are the first thought racing through your head, as an onslaught of cars races over your phone, is of all the contacts, photos, text messages, and notes stored on your device. What’s an Android aficionado to do in a case like this? Like your mother once said, “Plan ahead.” Planning ahead is the easiest way to make sure your data isn’t lost to the ether, even if your phone is destroyed. Luckily, Google automatically syncs your contacts, calendar appointments, docs, and even app purchases — as long as you give it permission to do so. Google will preserve a lot of your data on its own, but other methods and backup programs can cover what it misses. Read on to find out how to back up your Android phone’s content to your PC.



Stick with Google

Giving Google permission to back up your stuff will vary slightly from phone to phone. In general, you’ll want to go to Settings > Backup & reset, then tap Backup my data and Automatic restore. That will cover the following:

• Google Calendar settings
• Wi-Fi networks & passwords
• Home screen wallpapers
• Gmail settings
• Apps installed through Google Play (backed up on the Play Store app)
• Display settings (Brightness & Sleep)
• Language & Input settings
• Date & Time
• Third-party app settings & data (varies by app)

You’re not done yet. While in Settings, go to Accounts and tap your Google account.

You’ll see a long list of sync icons covering App data, Calendar, Contacts, Docs, Gmail, Photos, and virtually any other service that can be backed up. Make sure there’s a check in the box next to everything you want backed up. But that’s not the only backup trick Google has up its sleeve.

If you use Google’s Music service, all of your tunes will be preserved on Google’s servers, even if both your phone and your computer die at the same time. If you have a large music collection, like we do, the initial upload process will take a long time — we’re talking days. But once the first upload is done, subsequent albums will upload as they are added to your collection.

Your music can then be streamed on up to ten Android devices or on other computers.

Drag and drop content directly from your device

Photos, videos, and music from your Android phone may also be transferred directly to your PC or Mac by plugging your phone into your computer and manually copying the files over to your hard drive. It’s not a perfect solution, but it’s quick and easy, especially on a PC, where Windows will mount the phone as an external drive using the Media Transfer Protocol. What if Windows doesn’t detect my smartphone? Do you have the correct USB cable? Many users try to connect their smartphones to their computers with any MicroUSB cable they have lying around, but this may be the reason why your smartphone isn’t showing up in Windows.

In the picture above, for example, the third-party cable on the left is only able to charge an Android smartphone. The official LG MicroUSB cable on the right, however, gets the USB connection notification to appear in the notification area. Once connected, your smartphone will be listed in Windows File Explorer as one of your drives. If you’re using a Mac, download and install the transfer software, then run it when you connect your phone. It’ll start up automatically after that.
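The manual drag-and-drop approach is also easy to script once the phone is mounted. Below is a minimal Python sketch, under the assumption that the phone shows up as an ordinary folder; the paths and the extension list are illustrative examples, not anything your device mandates.

```python
import shutil
from pathlib import Path

def backup_media(phone_root, dest_root, exts=(".jpg", ".png", ".mp4", ".mp3")):
    """Copy media files from the mounted phone folder into a backup
    folder, preserving the directory layout. Returns the number of
    files copied. phone_root/dest_root are hypothetical example paths."""
    src, dest = Path(phone_root), Path(dest_root)
    copied = 0
    for f in src.rglob("*"):
        if f.is_file() and f.suffix.lower() in exts:
            target = dest / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Point `phone_root` at wherever your OS mounts the device (a drive letter on Windows, for instance) and re-run it whenever you want a fresh snapshot.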

Go with a third-party backup utility

My Backup Pro

If we were to design a straightforward backup system for Android, it would probably work just like My Backup Pro. Available for $5, this app backs up everything that’s possible to back up without having your phone rooted — photos, app data, browser bookmarks, contacts, system settings, home screen shortcuts, alarms, calendars, MMS messages, SMS messages, music, and more.

The app allows you to schedule backups at convenient times, like when you’re sleeping, and saves the backup files either to the MicroSD card in your phone or to the cloud, making your data instantly accessible. If your phone dies or if you move to a new phone, use My Backup Pro’s software to restore all of your settings, data, and apps in a single session.

SMS Backup & Restore

Want to preserve every last drunken text message for posterity? SMS Backup & Restore is a free app that integrates with your email account, Google Drive, or Dropbox to back up your SMS messages in XML format. You can store backups on your computer and send them via email. It’s possible to view and restore your messages selectively, or all at once. You can also use the app to schedule regular backups.

Use your device manufacturer’s software

Nearly every smartphone manufacturer out there offers some kind of backup solution for your device.
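A quick aside on the XML format mentioned above: because SMS backups are plain XML, they’re easy to inspect off-device. The Python sketch below parses a backup into dictionaries; the `sms` element name and the `address`/`date`/`body` attributes reflect a commonly seen layout, but treat them as assumptions and verify against a file produced by your own app version.

```python
import xml.etree.ElementTree as ET

def load_sms_backup(xml_text):
    """Parse an SMS backup XML string into a list of dicts.
    Attribute names ('address', 'date', 'body') are assumed from a
    typical backup layout; check your own file before relying on them."""
    root = ET.fromstring(xml_text)
    messages = []
    for sms in root.iter("sms"):
        messages.append({
            "address": sms.get("address"),
            "date": sms.get("date"),
            "body": sms.get("body"),
        })
    return messages
```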

Most of them are shifting away from computer-based backups to easy switching apps that let you port across your contacts, photos, messages, and the rest. If you have rooted your Android device, it is also worth looking at a dedicated root backup tool, a powerful option packed with power-user features.

Back that phone up!

As G.I. Joe would say, “Knowing is half the battle.” The other half of the battle is backing up your data in case your phone accidentally meets the wheels of a truck. Google is definitely an ally in the backup battle, but you’ll want to enlist the assistance of the apps above to ensure all of your photos, notes, and messages are protected. What are you waiting for?

It’s time to start backing up your phone!

Introduction

This document describes the Nexus 7000 Supervisor 2/2E compact flash failure issue tracked in a software defect, all of the possible failure scenarios, and the recovery steps. Prior to any workaround, it is strongly recommended to have physical access to the device in case a physical reseat is required. For some reload upgrades, console access may be required, and it is always recommended to perform these workarounds with console access to the supervisor to observe the boot process. If any of the steps in the workarounds fail, contact Cisco TAC for additional possible recovery options.

Background

Each N7K Supervisor 2/2E is equipped with two eUSB flash devices in a RAID1 configuration, one primary and one mirror. Together they provide non-volatile repositories for boot images, the startup configuration, and persistent application data. Over months or years in service, one of these devices may be disconnected from the USB bus, causing the RAID software to drop the device from the configuration.

The device can still function normally with one of the two devices. However, when the second device drops out of the array, the bootflash is remounted read-only, meaning you cannot save configuration or files to the bootflash, or allow the standby to sync to the active in the event it is reloaded. There is no operational impact on systems running in a dual flash failure state; however, a reload of the affected supervisor is needed to recover from this state.
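The failure progression just described can be summarized as a tiny state function. This is purely an illustrative model of the behavior above, not anything that runs on the switch.

```python
def bootflash_mode(failed_disks):
    """Model of the RAID1 bootflash failure progression:
    the mirror tolerates losing one member; losing both leaves the
    bootflash mounted read-only until the supervisor is reloaded."""
    if failed_disks == 0:
        return "read-write (redundant)"
    if failed_disks == 1:
        return "read-write (degraded)"
    return "read-only"
```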

Furthermore, any changes to the running configuration will not be reflected in startup and would be lost in the event of a power outage.

Symptoms

These symptoms have been seen:

• Compact flash diagnostic failure

switch# show diagnostic result module 5
Current bootup diagnostic level: complete
Module 5: Supervisor module-2 (Standby)
Test results: (. = Pass, F = Fail, I = Incomplete, U = Untested, A = Abort, E = Error disabled)
 1) ASICRegisterCheck------------->.
 2) USB--------------------------->.
 3) NVRAM------------------------->.
 4) RealTimeClock----------------->.
 5) PrimaryBootROM---------------->.
 6) SecondaryBootROM-------------->.
 7) CompactFlash------------------>F
 9) PwrMgmtBus-------------------->U
10) SpineControlBus--------------->.
11) SystemMgmtBus----------------->U
12) StatusBus--------------------->U
13) StandbyFabricLoopback--------->.
14) ManagementPortLoopback-------->.
15) EOBCPortLoopback-------------->.
16) OBFL-------------------------->.

Load the flash recovery tool to repair the bootflash. You can download the recovery tool from CCO under utilities for the N7000 platform or use the link below: It is wrapped in a tar.gz compressed file; uncompress it to find the .gbin recovery tool and a .pdf readme. Review the readme file, and load the .gbin tool onto the bootflash of the N7K.

While this recovery is designed to be non-impacting and can be performed live, TAC recommends performing it in a maintenance window in case any unexpected issues arise. After the file is on bootflash, you can run the recovery tool with 'load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin' and monitor the repair with:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

Since the RAID has only a single failed disk, standby synchronization to the active should be possible. If this is an option, see whether the standby fully syncs to the active with 'show module' and 'show system redundancy status' to verify the standby is in 'ha-standby' status. This indicates a Stateful Switchover (SSO) should be possible using the 'system switchover' command. After the standby is up, make sure configuration is saved externally with 'copy run tftp: vdc-all', and then fully save to startup with 'copy run start vdc-all'. After this you can attempt 'system switchover', which will reload the current active and force the current standby into the active role. After the previous active is reloaded into standby, it should automatically recover its RAID array. You can verify this after the reloaded supervisor is back up in 'ha-standby' status by performing a 'slot x show system internal raid' to verify all disks are [UU].

If the disks are still not fully back up, attempt to run the recovery tool again to try and clear up any lingering issues. If this is still not successful, you can try an 'out-of-service module x' for the affected module, followed by a 'no poweroff module x'.

If this still is not successful, please attempt physically reseating the affected module. If it is still not recovered, this could be a legitimate hardware failure requiring an RMA; however, you can attempt to reload into switch boot mode using the password recovery procedure and perform an 'init system' as a final attempt at recovery. If no spare supervisor is available, a full reload is necessary with the 'reload' command. In this case it is recommended to have physical access to the device in case a physical reseat is required. Have all running configurations backed up externally, and it is recommended to have them present on a USB disk along with the system and kickstart images to be safe. After the reload is performed and the device is up, check that the RAID status is [UU], and run the recovery tool if it does not look fully repaired. If the system is not coming up or the recovery tool is still not working, physically reseat the supervisor module and observe the boot process via console.

If a physical reseat does not recover the module, break into loader using the password recovery procedure, enter switch boot mode by booting the kickstart image, then perform an 'init system' to try to reinitialize the bootflash. This will wipe files on the bootflash, so it is crucial to have all necessary files and configuration backed up prior to these steps. Reload the device; it is strongly recommended to have console access, and physical access may be required. The supervisor should reload and repair its bootflash. After the system is up, verify that both disks are up and running with the [UU] status in 'show system internal file /proc/mdstat' and 'show system internal raid'.

If both disks are up and running then the recovery is complete and you can work to restore all previous configuration. If recovery was unsuccessful or partially successful go to step 3.

Note: It is commonly seen in instances of dual flash failure that a software reload might not fully recover the RAID and could require running the recovery tool or subsequent reloads to recover. In almost every occurrence, the issue has been resolved with a physical reseat of the supervisor module. Therefore, if physical access to the device is possible, after backing up configuration externally, you can attempt a quick recovery that has the highest chance of succeeding by physically reseating the supervisor when ready to reload the device. This will fully remove power from the supervisor and should allow the recovery of both disks in the RAID. Proceed to Step 3 if the physical reseat recovery is only partial, or Step 4 if it is entirely unsuccessful, meaning the system is not fully booting. If all else fails, it is likely a rare case of true hardware failure, and the supervisor would need to be RMA'd and possibly EFA'd.

This is why all configuration must be externally backed up prior to the recovery steps: in case an emergency RMA is required, you have all necessary configuration to swiftly bring the system back up.

Dual Supervisor Failure Scenarios

Scenario C (0 Fails on the Active, 1 Fail on the Standby)

Failure Scenario: 0 Fails on the Active, 1 Fail on the Standby

Steps to Resolution: In the scenario of a dual supervisor setup with no flash failures on the active and a single failure on the standby, a non-impacting recovery can be performed. As the active has no failures and the standby only has a single failure, the Flash Recovery Tool can be loaded onto the active and executed. After running the tool, it will automatically copy itself to the standby and attempt to resync the array. The recovery tool can be downloaded here: Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the box, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array.

You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status. An example of a recovery in operation looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

After recovery is finished it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks in sync. If the Flash Recovery Tool is unsuccessful, then since the active has both disks up, the standby should be able to sync to the active on reload. Therefore, in a scheduled window, perform an 'out-of-service module x' for the standby supervisor; it is recommended to have console access to the standby to observe the boot process in case any unexpected issues arise. After the supervisor is down, wait a few seconds and then perform 'no poweroff module x' for the standby. Wait until the standby fully boots into the 'ha-standby' status.
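When watching a resync like the one above remotely, a simple poll loop saves re-running the command by hand. The sketch below is an illustrative offline helper; the `read_mdstat` callable stands in for however you fetch the command output (for example, over SSH), which is an assumption of this sketch rather than a documented interface.

```python
import re
import time

def wait_for_raid_sync(read_mdstat, timeout_s=900, poll_s=30):
    """Poll mdstat-style output returned by read_mdstat() until every
    array reports all members up (no '_' in the trailing [..] status),
    or return False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        text = read_mdstat()
        members = re.findall(r"\[([U_]+)\]\s*$", text, flags=re.M)
        if members and all("_" not in m for m in members):
            return True
        time.sleep(poll_s)
    return False
```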

After the standby is back up, check the RAID with 'slot x show system internal raid' and 'slot x show system internal file /proc/mdstat'. If both disks are not fully back up after the reload, run the recovery tool again. If the reload and recovery tool are not successful, it would be recommended to attempt physically reseating the standby module in the window to try to clear the condition. If a physical reseat is not successful, try performing an 'init system' from switch boot mode by following the password recovery steps to break into this mode during boot. If still unsuccessful, contact TAC to attempt manual recovery.

Scenario D (1 Fail on the Active, 0 Fails on the Standby)

Recovery Scenario: 1 Fail on the Active, 0 Fails on the Standby

Steps to Resolution: In the scenario of a dual supervisor setup with one flash failure on the active and no failures on the standby, a non-impacting recovery can be performed by using the Flash Recovery Tool. As the standby has no failures and the active only has a single failure, the Flash Recovery Tool can be loaded onto the active and executed.

After running the tool, it will automatically copy itself to the standby and attempt to resync the array. The recovery tool can be downloaded here: Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the active, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status. An example of a recovery in operation looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

After recovery is finished it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks in sync.

If the Flash Recovery Tool is unsuccessful, the next step would be to perform a 'system switchover' to fail over the supervisor modules in a maintenance window. Therefore, in a scheduled window, perform a 'system switchover'; it is recommended to have console access to observe the boot process in case any unexpected issues arise.

Wait until the standby fully boots into the 'ha-standby' status. After the standby is back up, check the RAID with 'slot x show system internal raid' and 'slot x show system internal file /proc/mdstat'. If both disks are not fully back up after the reload, run the recovery tool again. If the reload and recovery tool are not successful, it would be recommended to attempt physically reseating the standby module in the window to try to clear the condition. If a physical reseat is not successful, try performing an 'init system' from switch boot mode by following the password recovery steps to break into this mode during boot. If still unsuccessful, contact TAC to attempt manual recovery.

Scenario E (1 Fail on the Active, 1 Fail on the Standby)

Recovery Scenario: 1 Fail on the Active, 1 Fail on the Standby

Steps to Resolution: In the event of a single flash failure on both the active and the standby, a non-impacting workaround can still be accomplished. As no supervisor is in a read-only state, the first step is to attempt using the Flash Recovery Tool. The recovery tool can be downloaded here: Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the active, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

It will automatically detect disconnected disks on the active and attempt repair, as well as automatically copy itself to the standby and detect and correct failures there. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status.

An example of a recovery in operation looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

After recovery is finished it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks in sync. If both supervisors recover into the [UU] status, then recovery is complete. If recovery is partial or did not succeed, go to Step 2.

In the event that the recovery tool did not succeed, identify the current state of the RAID on the modules. If there is still a single flash failure on both, attempt a 'system switchover', which will reload the current active and force the standby into the active role. After the previous active is reloaded back into 'ha-standby', check its RAID status, as it should be recovered during the reload. If the supervisor successfully recovers after the switchover, you can run the flash recovery tool again to try to repair the single disk failure on the current active supervisor, or perform another 'system switchover' to reload the current active and return the repaired supervisor to the active role.

Verify the reloaded supervisor has both disks repaired again, and re-run the recovery tool if necessary. If during this process the switchover is not fixing the RAID, perform an 'out-of-service module x' for the standby and then 'no poweroff module x' to fully remove and re-apply power to the module. If taking the module out of service is not successful, attempt a physical reseat of the standby. If after running the recovery tool one supervisor recovers its RAID and the other still has a failure, force the supervisor with the single failure to standby with a 'system switchover' if necessary. If the supervisor with a single failure is already the standby, do an 'out-of-service module x' for the standby and 'no poweroff module x' to fully remove and reapply power to the module.

If it is still not recovering, attempt a physical reseat of the module. In the event a reseat does not fix, break into the switch boot prompt using the password recovery procedure and do an 'init system' to reinitialize the bootflash.

If this is still unsuccessful, have TAC attempt manual recovery. Note: If at any point the standby is stuck in a 'powered-up' state rather than 'ha-standby', and you are unable to get the standby fully up with the steps above, a chassis reload will be required.

Scenario F (2 Fails on the Active, 0 Fails on the Standby)

Recovery Scenario: 2 Fails on the Active, 0 Fails on the Standby

Steps to Resolution: With two failures on the active and zero on the standby supervisor, a non-impacting recovery is possible, depending on how much running configuration has been added since the standby became unable to sync its running-config with the active. The recovery procedure is to copy the current running configuration from the active supervisor, fail over to the healthy standby supervisor, copy the missing running configuration to the new active, manually bring the previous active online, and then run the recovery tool. Back up all running configuration externally with 'copy running-config tftp: vdc-all'. Please note that in the occurrence of dual flash failure, configuration changes made since the system remounted read-only are not present in the startup configuration.

You can review 'show system internal raid' for the affected module to determine when the second disk failed, which is the point at which the system went read-only. From there you can review 'show accounting log' for each VDC to determine what changes were made since the dual flash failure, so you will know what to add if the startup configuration persists upon reload. Please note that it is possible the startup configuration is wiped upon reload of a supervisor with dual flash failure, which is why the configuration must be backed up externally.
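Sifting the accounting log for changes made after the second disk failed can also be done offline. The Python sketch below assumes a hypothetical entry layout of '<weekday> <month> <day> <HH:MM:SS> <year>:...:cmd=<command>'; both the timestamp format and the 'cmd=' field are assumptions to verify against your own 'show accounting log' output.

```python
import re
from datetime import datetime

# Assumed entry shape, e.g.:
# Sat Feb 14 10:22:31 2015:type=update:id=console0:user=admin:cmd=...
LOG_LINE = re.compile(r"^(\w{3} \w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4}):(.*)$")
LOG_TIME_FMT = "%a %b %d %H:%M:%S %Y"

def changes_since(log_text, failure_time):
    """Return the configuration commands logged at or after
    failure_time, i.e. the changes that may be missing from a stale
    startup-config after the dual flash failure."""
    cmds = []
    for line in log_text.splitlines():
        m = LOG_LINE.match(line)
        if not m:
            continue
        stamp = datetime.strptime(m.group(1), LOG_TIME_FMT)
        if stamp >= failure_time and "cmd=" in m.group(2):
            cmds.append(m.group(2).split("cmd=", 1)[1])
    return cmds
```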

Once the running configuration has been copied off of the active supervisor, it is a good idea to compare it to the startup configuration to see what has changed since the last save. This can be seen with 'show startup-config'.
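If you save both configurations to text, a unified diff makes the unsaved changes obvious at a glance. A minimal sketch (the '+' lines are the commands present in the running config but not yet in startup, i.e. what you would re-apply on the new active):

```python
import difflib

def config_diff(startup, running):
    """Unified diff of startup-config vs running-config text.
    Lines prefixed '+' exist only in the running configuration."""
    return list(difflib.unified_diff(
        startup.splitlines(), running.splitlines(),
        fromfile="startup-config", tofile="running-config",
        lineterm=""))
```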

The differences will of course be completely dependent on the environment, but it is good to be aware of what may be missing when the standby comes online as the active. It is also a good idea to have the differences already copied out in a notepad so that they can be quickly added to the new active supervisor after the switchover. After the differences have been evaluated, you will need to perform a supervisor switchover. TAC recommends that this be done during a maintenance window, as unforeseen issues may occur. The command to perform the failover to the standby is 'system switchover'. The switchover should occur very quickly and the new standby will begin rebooting. During this time you will want to add any missing configuration back to the new active.

This can be done by copying the configuration from the TFTP server (or wherever it was saved previously) or by simply manually adding the configuration in the CLI. In most instances the missing configurations are very short and the CLI option will be the most feasible. After some time the new standby supervisor may come back online in an 'ha-standby' state, but what normally occurs is that it gets stuck in a 'powered-up' state.

The state can be viewed using the 'show module' command and referring to the 'Status' column next to the module. If the new standby comes up in a 'powered-up' state, you will need to manually bring it back online. This can be done by issuing the following commands, where 'x' is the standby module stuck in a 'powered-up' state:

(config)# out-of-service module x
(config)# no poweroff module x

Once the standby is back online in an 'ha-standby' state, you will then need to run the recovery tool to ensure that the recovery is complete. The tool can be downloaded at the following link: Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the box, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status. An example of a recovery in operation looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

After recovery is finished it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks in sync.

Scenario G (0 Fails on the Active, 2 Fails on the Standby)

Recovery Scenario: 0 Fails on the Active, 2 Fails on the Standby

Steps to Resolution: With zero failures on the active and two on the standby supervisor, a non-impacting recovery is possible. The recovery procedure is to perform a reload of the standby. It is commonly seen in supervisors with a dual flash failure that a software 'reload module x' may only partially repair the RAID, or the module may get stuck powered-up upon reboot. Therefore, it is recommended to either physically reseat the supervisor with the dual flash failure to fully remove and reapply power to the module, or perform the following (x is the standby slot #):

# out-of-service module x
# no poweroff module x

If you see that the standby keeps getting stuck in the powered-up state and ultimately keeps power cycling after the steps above, this is likely due to the active reloading the standby for not coming up in time. This may be because the booting standby is attempting to re-initialize its bootflash/RAID, which can take up to 10 minutes, but it keeps being reset by the active before it can finish. To resolve this, configure the following, using 'x' for the standby slot # stuck in powered-up:

(config)# system standby manual-boot
(config)# reload module x force-dnld

The above will make it so the active does not automatically reset the standby, and will then reload the standby and force it to sync its image from the active.

Wait 10-15 minutes to see whether the standby is finally able to reach 'ha-standby' status. After it is in 'ha-standby' status, re-enable automatic reboots of the standby with:

(config)# system no standby manual-boot

Once the standby is back online in an 'ha-standby' state, you will then need to run the recovery tool to ensure that the recovery is complete. The tool can be downloaded at the following link: Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the box, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status. An example of a recovery in operation looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices:

After recovery is finished it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks in sync.

Scenario H (2 Fails on the Active, 1 Fail on the Standby)

Recovery Scenario: 2 Fails on the Active, 1 Fail on the Standby

Steps to Resolution: With two failures on the active and one on the standby supervisor, a non-impacting recovery is possible, depending on how much running configuration has been added since the standby became unable to sync its running-config with the active.

The recovery procedure is to back up the current running configuration from the active supervisor, fail over to the healthy standby supervisor, copy the missing running configuration to the new active, manually bring the previous active online, and then run the recovery tool. Back up all running configuration externally with 'copy running-config tftp: vdc-all'. Please note that in the occurrence of dual flash failure, configuration changes made since the system remounted read-only are not present in the startup configuration.

You can review 'show system internal raid' for the affected module to determine when the second disk failed, which is the point at which the system went read-only. From there you can review 'show accounting log' for each VDC to determine what changes were made since the dual flash failure, so you will know what to add if the startup configuration persists upon reload. Please note that it is possible the startup configuration is wiped upon reload of a supervisor with dual flash failure, which is why the configuration must be backed up externally. Once the running configuration has been copied off of the active supervisor, it is a good idea to compare it to the startup configuration to see what has changed since the last save. This can be seen with 'show startup-config'. The differences will of course be completely dependent on the environment, but it is good to be aware of what may be missing when the standby comes online as the active.

It is also a good idea to have the differences already copied out in a notepad so that they can be quickly added to the new active supervisor after the switchover. After the differences have been evaluated, you will need to perform a supervisor switchover. TAC recommends that this be done during a maintenance window, as unforeseen issues may occur. The command to perform the failover to the standby is 'system switchover'. The switchover should occur very quickly and the new standby will begin rebooting.

4. During this time, add any missing configuration back on the new active. This can be done by copying the configuration from the TFTP server (or wherever it was saved previously) or by simply adding the configuration manually in the CLI. Do not copy directly from TFTP to the running-configuration; copy to bootflash first, and then from bootflash to the running configuration. In most instances the missing configuration is very short, and the CLI option will be the most feasible.

5. After some time, the new standby supervisor may come back online in an 'ha-standby' state, but what normally occurs is that it gets stuck in a 'powered-up' state.

The state can be viewed using the 'show module' command, referring to the 'Status' column next to the module. If the new standby comes up in a 'powered-up' state, you will need to manually bring it back online. This can be done by issuing the following commands, where 'x' is the slot of the standby module stuck in a 'powered-up' state:

(config)# out-of-service module x
(config)# no poweroff module x

If the standby keeps getting stuck in the powered-up state and ultimately keeps power cycling after the steps above, this is likely because the active is reloading the standby for not coming up in time. The booting standby may be attempting to re-initialize its bootflash/RAID, which can take up to 10 minutes, but it keeps being reset by the active before it can finish. To resolve this, configure the following, using 'x' for the slot of the standby stuck in powered-up:

(config)# system standby manual-boot
(config)# reload module x force-dnld

The above stops the active from automatically resetting the standby, then reloads the standby and forces it to sync its image from the active. Wait 10-15 minutes to see if the standby is finally able to reach 'ha-standby' status.
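If you are watching this recovery over an automated session rather than by hand, the 10-15 minute wait can be scripted as a simple poll of the 'show module' output. A minimal sketch, assuming a caller-supplied run_command callable (hypothetical, e.g. a wrapper around an SSH session to the switch) that returns command output as text, and assuming the module status is the last column of the module's row:

```python
import time

def wait_for_ha_standby(run_command, slot, timeout_s=15 * 60, poll_s=30):
    """Poll 'show module' until the given slot reports ha-standby.

    run_command: caller-supplied function that executes a CLI command on
    the switch and returns its output as a string (e.g. an SSH wrapper).
    Returns True if the slot reached 'ha-standby' within the timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        output = run_command("show module")
        for line in output.splitlines():
            cols = line.split()
            # Module rows start with the slot number; status is the last column.
            if cols and cols[0] == str(slot) and cols[-1] == "ha-standby":
                return True
        time.sleep(poll_s)
    return False
```

This is only a convenience; checking 'show module' manually every few minutes works just as well.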

After it is in 'ha-standby' status, re-enable automatic reboots of the standby with:

(config)# system no standby manual-boot

6. Once the standby is back online in an 'ha-standby' state, run the recovery tool to ensure that the recovery is complete and to repair the single disk failure on the active. The tool can be downloaded at the following link:

Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the box, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status. An example of a recovery in progress looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      [=>...................]  recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices: <none>

After recovery is finished, it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks synced. If the current active with a single failure is not recovered by the recovery tool, attempt another 'system switchover', ensuring that your current standby is in 'ha-standby' status.

If this is still not successful, please contact Cisco TAC.

Scenario I (1 Fail on the Active, 2 Fails on the Standby)

Recovery Scenario:
1 Fail on the Active
2 Fails on the Standby

Steps to Resolution: In a dual supervisor scenario with 1 failure on the active and 2 failures on the standby supervisor, a non-impacting recovery can be possible, but in many cases a reload may be necessary. The process is to first back up all running configurations, then attempt to recover the failed compact flash on the active using the recovery tool; then, if successful, manually reload the standby and run the recovery tool again. If the initial recovery attempt is unable to recover the failed flash on the active, TAC must be engaged to attempt a manual recovery using the debug plugin.

1. Back up all running configuration externally with 'copy running-config tftp: vdc-all'. You may also copy the running-config to a local USB stick if a TFTP server is not set up in the environment.

2. Once the current running-configuration is backed up, run the recovery tool to attempt a recovery of the failed flash on the active. The tool can be downloaded at the following link:

Once you have downloaded the tool, unzipped it, and uploaded it to the bootflash of the box, execute the following command to begin the recovery:

# load bootflash:n7000-s2-flash-recovery-tool.10.0.2.gbin

The tool will start running, detect disconnected disks, and attempt to resync them with the RAID array. You can check the recovery status with:

# show system internal file /proc/mdstat

Verify that recovery is proceeding; it may take several minutes to fully repair all disks to a [UU] status.

An example of a recovery in progress looks as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
      [=>...................]  recovery = 8.3% (1240) finish=2.1min speed=12613K/sec
unused devices: <none>

After recovery is finished, it should look as follows:

switch# show system internal file /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sdd6[1] sdc6[0]
      77888 blocks [2/2] [UU]

After all disks are in [UU], the RAID array is fully back up with both disks synced.

3. If, after running the recovery tool in step 2, you are not able to recover the failed compact flash on the active supervisor, you must contact TAC to attempt a manual recovery using the Linux debug plugin.

4. After verifying that both flashes show as '[UU]' on the active, you can proceed with manually rebooting the standby supervisor. This can be done by issuing the following commands, where 'x' is the slot of the standby module stuck in a 'powered-up' state:

(config)# out-of-service module x
(config)# no poweroff module x

This should bring the standby supervisor back into an 'ha-standby' state (checked by viewing the Status column in the 'show module' output). If this is successful, proceed to step 6; if not, try the procedure outlined in step 5.
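The [2/1]/[2/2] counts and [U_]/[UU] flags in the outputs above are the only markers that matter, so if you are capturing 'show system internal file /proc/mdstat' output repeatedly, the health check can be automated with a small parser. A minimal sketch; the abbreviated sample output mirrors the examples in this document:

```python
import re

def raid_health(mdstat_text):
    """Map each md device in /proc/mdstat output to True when both
    RAID members are up ([UU]); False while any member is missing
    or still rebuilding ([U_])."""
    health = {}
    current = None
    for line in mdstat_text.splitlines():
        name = re.match(r"(md\d+)\s*:", line)
        if name:
            current = name.group(1)
        status = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", line)
        if status and current:
            total, active, flags = (int(status.group(1)),
                                    int(status.group(2)),
                                    status.group(3))
            health[current] = (active == total and "_" not in flags)
    return health

rebuilding = """Personalities : [raid1]
md6 : active raid1 sdd6[2] sdc6[0]
      77888 blocks [2/1] [U_]
"""
print(raid_health(rebuilding))  # md6 is still rebuilding here
```

Recovery is complete only when every device maps to True, i.e. every array reads [2/2] [UU].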

5. If the standby keeps getting stuck in the powered-up state and ultimately keeps power cycling after the steps above, this is likely because the active is reloading the standby for not coming up in time. The booting standby may be attempting to re-initialize its bootflash/RAID, which can take up to 10 minutes, but it keeps being reset by the active before it can finish. To resolve this, configure the following, using 'x' for the slot of the standby stuck in powered-up:

(config)# system standby manual-boot
(config)# reload module x force-dnld

The above stops the active from automatically resetting the standby, then reloads the standby and forces it to sync its image from the active.

Wait 10-15 minutes to see if the standby is finally able to reach 'ha-standby' status. After it is in 'ha-standby' status, re-enable automatic reboots of the standby with:

(config)# system no standby manual-boot

6. Once the standby is back online in an 'ha-standby' state, run the recovery tool to ensure that the recovery is complete. You can run the same tool that you have on the active for this step; no additional download is needed, as the recovery tool runs on both the active and the standby.

Scenario J (2 Fails on the Active, 2 Fails on the Standby)

Recovery Scenario:
2 Fails on the Active
2 Fails on the Standby

Steps to Resolution:

1. Back up all running configuration externally with 'copy running-config tftp: vdc-all'. Please note that in the event of a dual flash failure, configuration changes made since the system remounted as read-only are not present in the startup configuration. You can review 'show system internal raid' for the affected module to determine when the second disk failed, which is when the system went read-only.

From there you can review 'show accounting log' for each VDC to determine what changes were made since the dual flash failure, so you will know what to add back if the startup configuration persists upon reload.

2. Reload the device. It is strongly recommended to have console access, and physical access may be required. The supervisor should reload and repair its bootflash. After the system is up, verify that both disks are up and running with the [UU] status in 'show system internal file /proc/mdstat' and 'show system internal raid'. If both disks are up and running, the recovery is complete and you can work to restore all previous configuration. If the recovery was unsuccessful or only partially successful, go to step 3.

Note: It is commonly seen in instances of dual flash failure that a software 'reload' does not fully recover the RAID, and running the recovery tool or performing subsequent reloads may be required. In almost every occurrence, the issue has been resolved with a physical reseat of the supervisor module. Therefore, if physical access to the device is possible, after backing up the configuration externally you can attempt the quickest recovery with the highest chance of succeeding by physically reseating the supervisor when you are ready to reload the device.

This fully removes power from the supervisor and should allow the recovery of both disks in the RAID. Proceed to step 3 if the physical reseat recovery is only partial, or to step 4 if it is entirely unsuccessful in that the system does not fully boot.

If, after completing all of the above steps, the recovery is unsuccessful, it is likely a rare case of true hardware failure, and the supervisor will need to be replaced via RMA. This is why all configuration must be externally backed up prior to the recovery steps: in case an emergency RMA is required, you have all the configuration necessary to swiftly bring the system back up.

Summary FAQs

Is there a permanent solution to this issue?

See the Long Term Solutions section below.

Why is it not possible to recover a dual failure on both the active and standby by reloading the standby supervisor and failing over?

In order for the standby supervisor to come up in an 'ha-standby' state, the active supervisor must write several things to its compact flash (SNMP info, etc.), which it cannot do if it has a dual flash failure itself.

What happens if the Flash Recovery Tool is unable to remount the compact flash?

Contact Cisco TAC for options in this scenario.

Does this bug also affect the Nexus 7700 Sup2E?

There is a separate defect for the N7700 Sup2E -. The recovery tool will not work for the N7700.

Does the recovery tool work for NPE images?

The recovery tool does not work for NPE images.

Will an ISSU to a resolved version of code resolve this issue?

An ISSU utilizes a supervisor switchover, which may not perform correctly due to the compact flash failure.

We reset the affected board. The RAID status prints 0xF0, but the GOLD test still fails?

The RAID status bits get reset after a board reset or after the auto recovery is applied. However, not all failure conditions can be recovered automatically. If the RAID status bits are not printed as [2/2] [UU], the recovery is incomplete; follow the recovery steps listed above.

Will the flash failure have any operational impact?

No, but the system may not boot back up after a power failure, and the startup configuration will be lost as well.

What is recommended, from the customer's perspective, for a healthy running system in terms of monitoring and recovery?

Check the GOLD compact flash test status for any failures and attempt recovery as soon as the first flash part fails.

Can I fix a failed eUSB flash by doing an ISSU from the affected code to the fixed release?

An ISSU will not fix a failed eUSB. The best option is to run the recovery tool for a single eUSB failure on the supervisor, or to reload the supervisor in case of a dual eUSB failure. Once the issue is corrected, then do the upgrade. The fix helps correct single eUSB failures ONLY, and it does so by scanning the system at a regular interval and attempting to reawaken inaccessible or read-only eUSB devices using the script. It is rare to see both eUSB flashes on the supervisor fail simultaneously, hence this workaround is effective.

How long does it take for the issue to reappear if you fix the flash failures using the plugin or a reload?

This is generally seen with longer uptimes.

This has not been exactly quantified and can range from a year or longer. The bottom line is that the more stress on the eUSB flash in terms of reads and writes, the higher the probability of the system running into this scenario.

'show system internal raid' shows the flash status twice, in different sections, and these sections are not consistent?

The first section shows the current status and the second section shows the bootup status. The current status is what matters, and it should always show UU.
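Because the status appears twice, it is easy to read the wrong section when skimming captured output from many devices; keeping only the first (current) occurrence avoids that mistake. A minimal sketch, assuming the command output has been captured as text; the sample output here is abbreviated and illustrative, not a verbatim transcript:

```python
import re

def current_raid_status(raid_output):
    """Return the first '[N/M] [..]' status string found in captured
    'show system internal raid' output. The status appears twice; the
    first occurrence is the current status, which is the one that
    must read '[2/2] [UU]'."""
    m = re.search(r"\[\d+/\d+\]\s*\[[U_]+\]", raid_output)
    return m.group(0) if m else None

# Abbreviated, illustrative capture: current status first, bootup status second.
sample = """Current RAID status info:
RAID data from driver - 77888 blocks [2/2] [UU]
Bootup RAID status info:
RAID data from driver - 77888 blocks [2/1] [U_]
"""
print(current_raid_status(sample))  # the first (current) status
```

Here the device is healthy now even though it booted with a degraded array, which is exactly the case where reading the second section would mislead you.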

Long Term Solutions

This defect has a workaround in 6.2(14), but the firmware fix was added in 6.2(16) and in 7.2(x) and later. It is advisable to upgrade to a release with the firmware fix to completely resolve this issue.

If you are unable to upgrade to a fixed version of NX-OS, there are two possible solutions.

Solution 1 is to run the flash recovery tool proactively every week using the scheduler.

Use the following scheduler configuration, with the flash recovery tool in the bootflash:

feature scheduler
scheduler job name Flash_Job
  copy bootflash:/n7000-s2-flash-recovery-tool.10.0.2.gbin bootflash:/flash_recovery_tool_copy
  load bootflash:/flash_recovery_tool_copy
exit
scheduler schedule name Flash_Recovery
  job name Flash_Job
  time weekly 7

Notes:
• The flash recovery tool needs to have this same name and be in the bootflash.
• The 7 in the 'time weekly 7' configuration represents a day of the week, Saturday in this case.

• Cisco recommends running the flash recovery tool at most once a week.

Solution 2 is documented at the following.