
This chapter contains release notes and known issues for the Sun ZFS Storage 7320 appliance.

4.1 Latest Product Information

For the latest information about the Sun ZFS Storage 7320 appliance, including the product notes, see the following link:

http://wikis.sun.com/display/fishworks/documentation

4.2 Known Issues

This section contains the following tables:

Table 4-1 Network Datalink Modifications Do Not Rename Routes

Release Note: RN001
Related Bug IDs: 6715567

The Configuration/Network view permits a wide variety of networking configuration changes on the Sun Storage system. One such change is taking an existing network interface and associating it with a different network datalink, effectively moving the interface's IP addresses to a different physical link (or links, in the case of an aggregation). In this scenario, the network routes associated with the original interface are automatically deleted and must be re-added by the administrator to the new interface. In some situations this may result in the loss of a path to particular hosts until those routes are restored.
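
As an illustration, a deleted route could be re-added to the new interface from the appliance CLI roughly as follows. This is a sketch only: the destination, gateway, and interface name (igb0) are hypothetical, and prompts may differ between releases.

    hostname:> configuration net routing
    hostname:configuration net routing> create
    hostname:configuration net routing route (uncommitted)> set family=IPv4
    hostname:configuration net routing route (uncommitted)> set destination=203.0.113.0
    hostname:configuration net routing route (uncommitted)> set mask=24
    hostname:configuration net routing route (uncommitted)> set gateway=203.0.113.1
    hostname:configuration net routing route (uncommitted)> set interface=igb0
    hostname:configuration net routing route (uncommitted)> commit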

Table 4-2 Data integrity and connectivity issues with NFS/RDMA

Release Note: RN003
Related Bug IDs: 6879948, 6870155, 6977462, 6977463

NFS/RDMA support is a preview for early adopters and should not be used on production systems. There are currently no supported clients. Linux clients still under development have experienced data validation errors and loss of access to NFS/RDMA shares on the appliance. Solaris Express clients may experience NFS error messages of the form "IBT_ERROR_ACCESS_VIOLATION_CHAN" or "NFS Server not responding", and may panic when attempting to access NFS/RDMA shares on the appliance.

Table 4-3 Network interfaces may fail to come up in large jumbogram configurations

Release Note: RN004
Related Bug IDs: 6857490

In systems with large numbers of network interfaces using jumbo frames, some network interfaces may fail to come up due to hardware resource limitations. Such network interfaces will be unavailable, but will not be shown as faulted in the BUI or CLI. If this occurs, turn off jumbo frames on some of the network interfaces.
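
Disabling jumbo frames amounts to lowering the MTU on the affected datalink. A minimal CLI sketch, assuming a hypothetical datalink named igb0 (verify the property name and procedure against the product documentation for your release):

    hostname:> configuration net datalinks select igb0
    hostname:configuration net datalinks igb0> set mtu=1500
    hostname:configuration net datalinks igb0> commit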

Table 4-4 Multi-pathed connectivity issues with SRP initiators

Release Note: RN005
Related Bug IDs: 6908898, 6911881, 6920633, 6920730, 6920927, 6924447, 6924889, 6925603

In cluster configurations, Linux multi-path clients have experienced loss of access to shares on the appliance. If this happens, a new session or connection to the appliance may be required to resume I/O activity.

Table 4-5 Restoring configuration backups from 2009.Q3 may fail to completely restore networking configuration

Release Note: RN006
Related Bug IDs: 6843910, 6885651, 6885741

Restoring a configuration backup taken while running 2009.Q3 may cause one or more faults to be reported indicating that datalinks or IP interfaces could not be restarted and have been placed into the maintenance state. This is because 2009.Q3 backups are missing information needed to completely restore the dependencies between networking objects. If this happens, apply a trivial change to any networking object (such as changing its label) and commit the change; this causes the appliance to rebuild the networking object dependencies and resolves the problem.
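
As a sketch of this workaround in the CLI, assuming a hypothetical interface named igb0, committing a no-op label change is sufficient to trigger the rebuild:

    hostname:> configuration net interfaces select igb0
    hostname:configuration net interfaces igb0> set label="Untitled Interface"
    hostname:configuration net interfaces igb0> commit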

Table 4-6 Rolling back after storage reconfiguration results in faulted pools

Release Note: RN007
Related Bug IDs: 6878243

Rolling back to a previous release after reconfiguring storage will result in one or more pools appearing to be faulted. These pools are those that existed when the rollback target release was in use; they are not the same pools that were configured using the more recent software. The software does not warn about this issue and does not attempt to preserve pool configuration across rollback. To work around this issue, after rolling back, unconfigure the storage pools and then import the pools you had created using the newer software. Note that this will not succeed if there was a pool format change between the rollback target release and the newer release under which the pools were created; in that case, an error will result on import and the only solution is to perform the upgrade successfully. In general, the best way to avoid this issue is not to reconfigure storage after an upgrade until the functionality of the new release has been validated.
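
In CLI terms, the workaround might look roughly like the following. Treat this as a sketch under stated assumptions: the pool name (pool-0) is hypothetical, and the exact storage commands vary by release, so confirm the procedure against the product documentation before attempting it.

    hostname:> configuration storage set pool=pool-0
    hostname:configuration storage (pool-0)> unconfig
    hostname:configuration storage (pool-0)> import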

Table 4-7 Reboot and takeover during upgrade

Release Note: RN008
Related Bug IDs: 6904547, 6918932, 6925704

Rebooting, or taking over from, an appliance that is running 2010.Q1 or older software while it is unpacking or applying a software update can leave the update subsystem in an inconsistent state on that storage controller and prevent the update from being applied successfully in the future. This issue has been addressed in the 2010.Q3 software but may still be seen while upgrading from an older version. Customers are therefore advised to avoid rebooting or taking over from storage controllers that are unpacking or applying a software update until all storage controllers are running 2010.Q3 or newer software.

Table 4-8 Unanticipated error when cloning replicated projects with CIFS shares

Release Note: RN009
Related Bug IDs: 6917160

When cloning replicated projects that are exported using the new "exported" property and shared via CIFS, you will see an error and the clone will fail. You can work around this by unexporting the project or share, or by unsharing it via CIFS, before attempting to create the clone.
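
For example, assuming the replica project appears in the shares list under the hypothetical name myproject, it could be unexported from the CLI before cloning (consult the replication documentation for locating replica projects on your release):

    hostname:> shares select myproject
    hostname:shares myproject> set exported=false
    hostname:shares myproject> commit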

Table 4-9 Some FC paths may not be rediscovered after takeover/failback

Release Note: RN010
Related Bug IDs: 6920713

After a takeover and subsequent failback of shared storage, QLogic FC HBAs on Windows 2008 will occasionally not rediscover all paths. When this issue was observed in lab conditions, at least one path was always rediscovered, and the missing paths were always rediscovered upon initiator reboot. Other HBAs on Windows 2008, and QLogic HBAs on other platforms, do not exhibit this problem.

Table 4-10 Hang during upgrade from 2009.Q3 and older releases

Release Note: RN012
Related Bug IDs: 6903154, 6904955

Due to a bug that existed in all software releases prior to 2010.Q1, there is a possibility that the appliance-initiated reboot that occurs as the final step of an upgrade may be impeded. If this bug is encountered, the appliance being upgraded will hang indefinitely after emitting the "Installing grub on /dev/rdsk/..." message. In this case, the reboot cannot be initiated automatically, but there is no other ill effect; the appliance may simply be reset via the service processor. After the reset, the system will indicate that it was successfully updated.
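
The reset can be issued from the controller's ILOM service processor CLI; a minimal sketch, assuming SSH access to the SP:

    -> reset /SYS
    Are you sure you want to reset /SYS (y/n)? y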

Table 4-11 Multiple attempts to disable remote replication service hang management interface

Release Note: RN015
Related Bug IDs: 6969007

Multiple consecutive attempts to disable remote replication may render the appliance management stack unusable, requiring a maintenance system restart to recover. To avoid this problem, do not attempt to disable remote replication more than once.

Table 4-12 Chassis service LED is not always illuminated in response to hardware faults

Release Note: RN017
Related Bug IDs: 6956136

In some situations, the chassis service LED on the controller will not be illuminated following a failure condition. Notification of the failure via the user interface, via alerts (including email, syslog, and SNMP, if configured), and via Oracle Automatic Service Request ("Phone Home") will function normally.

Table 4-13 HCA port may be reported as down

Release Note: RN019
Related Bug IDs: 6978400

HCA ports may be reported as down after a reboot. If the datalinks and interfaces overlaid on them are functioning, this reported state is incorrect.

Table 4-14 Incorrect mainboard fault diagnosis with some products

Release Note: RN021
Related Bug IDs: 6983675

The 7320 product contains active PCI Express riser cards that, as with any active component, can fail. With current system firmware and software, if a riser card fails, the appliance will diagnose the fault but will attribute it to the controller mainboard rather than the riser card. If a mainboard fault is diagnosed, contact your service provider. Service providers are advised to consult the appropriate knowledge article for details on how to refine this diagnosis and replace the correct component(s) should such a fault occur.

Table 4-15 Nearly full storage pool impairs performance and manageability

Release Note: RN022
Related Bug IDs: 6525233, 6975500, 6978596

Storage pools at more than 80% capacity may experience degraded I/O performance, especially when performing write operations. This degradation can become severe when the pool exceeds 90% full and can result in impaired manageability as the free space available in the storage pool approaches zero. This impairment may include very lengthy boot times, slow BUI/CLI operation, management hangs, inability to cancel an in-progress scrub, and very lengthy or indefinite delays while restarting services such as NFS and SMB. Best practices, as described in the product documentation, call for expanding available storage or deleting unneeded data when a storage pool approaches these thresholds. Storage pool consumption can be tracked via the BUI or CLI; refer to the product documentation for details.
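
As an example, pool capacity can be checked at any time from the CLI (a sketch; the exact fields shown vary by release):

    hostname:> configuration storage
    hostname:configuration storage> show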

Table 4-16 Share-level snapshot inventory not updated following project-level snapshot rename

Release Note: RN024
Related Bug IDs: 6982952

Following a project-level snapshot rename using the CLI, the share-level snapshot context's snapshot inventory is not updated to reflect the rename. Selecting a snapshot from the inventory may result in an error message of the form "The action could not be completed because the target no longer exists on the system. It may have been destroyed or renamed by another user, or the current bookmark may be stale." This can be worked around by logging out of the CLI and back in, or by using the BUI.

Table 4-17 Management UI hangs on takeover or management restart with thousands of shares or LUNs

Release Note: RN025
Related Bug IDs: 6980997, 6979837

When a cluster takeover occurs or the management subsystem is restarted either following an internal error or via the maintenance system restart CLI command, management functionality may hang in the presence of thousands of shares or LUNs. The likelihood of this is increased if the controller is under heavy I/O load. The threshold at which this occurs will vary with load and system model and configuration; smaller systems such as the 7110 and 7120 may hit these limits at lower levels than controllers with more CPUs and DRAM, which can support more shares and LUNs and greater loads. Best practices include testing cluster takeover and failback times under realistic workloads prior to placing the system into production. If you have a very large number of shares or LUNs, avoid restarting the management subsystem unless directed to do so by your service provider.

Table 4-18 Moving shares between projects can disrupt client I/O

Release Note: RN026
Related Bug IDs: 6979504

When moving a share from one project to another, client I/O may be interrupted. Do not move shares between projects while client I/O is under way unless the client-side application is known to be resilient to temporary interruptions of this type.

Table 4-19 Shadow migration hangs management UI when source has thousands of files in the root directory

Release Note: RN027
Related Bug IDs: 6967206, 6976109

Shadow migration sources containing many thousands of files in the root directory will take many minutes to migrate, and portions of the appliance management UI may be unusable until migration completes. This problem will be exacerbated if the source is particularly slow or the target system is also under heavy load. If this problem is encountered, do not reboot the controller or restart the management subsystem; instead, wait for migration to complete. When planning shadow migration, avoid this filesystem layout if possible; placing the files in a single subdirectory beneath the root or migrating from a higher-level share on the source will avoid the problem.

Table 4-20 Shadow migration does not report certain errors at the filesystem root

Release Note: RN028
Related Bug IDs: 6890508

Errors migrating files at the root of the source filesystem may not be visible to the administrator. Migration will be reported as "in progress", but no progress will be made. This may occur when attempting to migrate large files via NFSv2; use NFSv3 or later instead when large files are present.
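
On a Linux client, for instance, an NFSv3 mount of the migration source can be forced with a standard mount option (the server name and paths here are hypothetical):

    # mount -t nfs -o vers=3 source-server:/export/data /mnt/source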

Table 4-21 Repair of faulted pool does not trigger sharing

Release Note: RN029
Related Bug IDs: 6975228

When a faulted pool is repaired, the shares and LUNs on the pool are not automatically made available to clients. There are two main ways to enter this state:

  • Booting the appliance with storage enclosures disconnected, powered off, or missing disks

  • Performing a cluster takeover at a time when some or all of the storage enclosures and/or disks making up one or more pools were detached from the surviving controller or powered off

When the missing devices become available, controllers with SAS-1 storage subsystems will automatically repair the affected storage pools. Controllers with SAS-2 storage subsystems will not; the administrator must repair the storage pool resource using the resource management CLI or BUI functionality. See product documentation for details. In neither case, however, will the repair of the storage pool cause the shares and LUNs to become available. To work around this issue, restart the management subsystem on the affected controller using the maintenance system restart command in the CLI. This is applicable ONLY following repair of a faulted pool as described above.
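
The restart itself is a single CLI command on the affected controller, as described in the product documentation:

    hostname:> maintenance system restart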

Table 4-22 DFS root creation fails with fully-qualified appliance hostname

Release Note: RN031
Related Bug IDs: 6961767

Creating a DFS root on an appliance SMB share may fail if the fully-qualified DNS name of the appliance is used. This is due to a missing interoperability feature in the SMB stack and can be worked around by using the appliance's unqualified name when creating the DFS root.

Table 4-23 NDMP service may enter the maintenance state when changing properties

Release Note: RN032
Related Bug IDs: 6979723

When changing properties of the NDMP service, it may enter the maintenance state due to a timeout. This will be reflected in the NDMP service log with an entry of the form stop method timed out. If this occurs, restart the NDMP service as described in the product documentation. The changes made to service properties will be preserved and do not need to be made again.
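
The product documentation describes how to restart the service; one possibility, sketched here and worth verifying against the documentation for your release, is a disable/enable cycle from the CLI:

    hostname:> configuration services ndmp
    hostname:configuration services ndmp> disable
    hostname:configuration services ndmp> enable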

Table 4-24 Spurious errors on upgrade from 2009.Q3 when shadow migrating zero-length files

Release Note: RN033
Related Bug IDs: 6976648

When upgrading from 2009.Q3, the presence of zero-length files in a shadow migration target filesystem will trigger spurious background shadow migration errors of the form "Unknown error 22" for each such file. These errors are harmless and may safely be ignored. This problem occurs only when upgrading from 2009.Q3; it does not occur when upgrading from 2010.Q1 or any other release. If you see errors of this type under other circumstances, contact your service provider.

Table 4-25 Suboptimal PCIe link training

Release Note: RN035
Related Bug IDs: 6979482

PCI Express 2.0-compatible cards may train to PCI Express 1.x speeds, which may impact the performance of I/O through the affected card. Detection software for this condition runs during boot and sends a fault message observable through the Maintenance fault logs. To recover from this condition, the system must be rebooted.
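
A sketch of reviewing the fault log and performing the reboot from the CLI (the log name and command chaining are assumptions; verify them on your release):

    hostname:> maintenance logs select fltlog list
    hostname:> maintenance system reboot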

Table 4-26 Configuring InfiniBand datalinks under load may cause them to be marked faulted

Release Note: RN036
Related Bug IDs: 6987187

If InfiniBand datalinks are under extreme I/O load, attempting to reconfigure them may cause one or more of those datalinks to be marked faulted. If this occurs, either the system must be rebooted or the faulted datalinks (and any IP interfaces built on top of them) must be destroyed and recreated.
