Setting up a multi-site IP Block release distribution system using ENOVIA Synchronicity DesignSync Data Manager


Contents

Executive summary
Introduction
Primary solution goals
Goals of additional extensions
Overview of primary approach
The mirror solution
Extensions
Efficient use of data at design centers
Conclusion
Appendix – TCL code


Executive summary

A key contributor to competitive advantage and cost control in the semiconductor market is the effective and efficient management of complex production data. This data, or intellectual property (IP), is best utilized when managed in small, reusable sets that can be shared by multiple teams across various projects, ensuring quality control and eliminating replication of effort. This paper describes how to customize the ENOVIA Synchronicity DesignSync Data Manager (DesignSync) to act as a distribution system for IP by providing centralized storage, version control and resource management through efficient distribution processes.

Introduction

Design teams use DesignSync to manage reusable blocks of intellectual property data (called IP Blocks or Blocks) between various design centers. In the course of product development, teams produce thousands of IP Blocks and configurations of Blocks. After each Block is designed, tested and stabilized, it is released for consumption by other design teams throughout the enterprise. Several methods allow a company to optimize DesignSync for fast and efficient management and delivery of Blocks across numerous networks at multiple sites within the enterprise. This paper outlines these methods using several components of DesignSync: modules, mirrors, triggers, back references, file caches and module caches.

This solution document is intended for design methodologists, project leaders and IP development teams responsible for ensuring optimal reuse of their corporate IP assets. It is valid on DesignSync releases V6R2011 and later. The section on "Auto-Generated Mirrors" is valid on release V6R2012 and later.

Figure 1: IP Block Producers maintain IP Block workareas (IP Block 1, 2 and 3) and publish them to an IP Block Repository Server; Design Servers at consuming sites pull the Blocks into design workareas (Design 1, 2 and 3).


Primary solution goals

The goals of the primary solution are to:

● Make accessible to every design center the latest released versions of relevant IP Blocks. As updates to Blocks are released, automatically push the new releases to the design centers utilizing the Blocks.

● Incur minimal overhead to set up the multi-site IP Block release distribution system.

● Allow each design center to determine which IP Blocks are available at their site.

● Achieve network and server efficiency that allows massive volumes of Blocks to be distributed without disruption to operations.

● Allow creation and distribution of Blocks that are a hierarchy of other Blocks. The use of hierarchical Blocks is a common requirement. See Figure 2.

Goals of additional extensions

The goals of the available extensions to the primary solution are to:

● Allow multiple released versions of an IP Block to be available to support several ongoing designs.

● Automatically update a Block when changes are released and remove older versions no longer in use by any designs.

● Ensure optimal reuse of data files unchanged between versions of the Block by eliminating multiple, redundant copies of the same file on the LAN.

● Allow selected people at a design center to be notified when new Blocks are released and provide an efficient method of pulling the Block if they choose to use it.

● Allow a design center to access "work-in-progress" versions of an IP Block that is under development.

Figure 2: Two IP producer servers (IP Producer 1 and IP Producer 2) host hierarchies of Blocks and sub-Blocks (IP1_Block1 through IP1_Block5 with IP1_SubA/B/C, and IP2_Block1 through IP2_Block3 with IP2_SubA through IP2_SubE) that are populated into the Design 1 and Design 2 workareas.

Figure 3: Three consumer workareas reference different versions of IP2_Block2 (1.1, 1.2 and 1.3). The module cache holds each module version, and the file cache holds the unique member file versions (File1;1.1, File2;1.1, File2;1.2, Dir/File3;1.1, Dir/File3;1.2); mcache symlinks and file symlinks let every workarea share a single local copy of each unchanged file.


Overview of primary approach

Use modules for the IP Blocks

An ideal way to begin is to manage each Block's data as a module. A Block's data files are individually revision-controlled, and a collection of data files is revision-controlled into module versions. As with individual file versions, a module version can be branched to offer significant control in managing the collection of files that make up a Block.

Modules have the additional advantage of allowing the directory structure comprising the Block’s data to change over time. Fetching the older version of a module recreates the directory structure defined at the time the older module version was created. This “directory versioning” capability is available only with modules and not with DesignSync’s management of files-based data.
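For instance, fetching an older version of a module rebuilds the directory tree that version defined. A minimal sketch (the server, module name and version number are hypothetical; the command is run from within a workspace):

    stclc> populate sync://host:port/Modules/myblock;1.2    ;# fetch module version 1.2, recreating the directory structure it defined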

Finally, modules allow for one Block to reference and use any number of sub-Blocks through hierarchical references (hrefs). Care must be taken upfront in configuring the way hrefs connect and reference their sub-Blocks to prevent attempts to fetch multiple versions of the same Block to the same directory location. For more detail, refer to the section titled, “Create top-level container module(s) to identify the Blocks used by each design center”.

The remainder of this paper assumes the reader is familiar with the Modules capability of DesignSync. This overall capability is referred to as “Modules” while each collection of managed data is referred to as a “module”.

The mirror solution

Use mirrors to update the modules at remote design centers

The mirror "push" system is an effective method to push updated versions of Blocks to the design centers by using a specific selector. If the selector is dynamic (e.g. a branch tag or a module version tag), then whenever that dynamic selector changes (by virtue of a new module version being checked into a branch or the tag moving to a new module version), the mirror system automatically fetches the module's data to the design centers.

One hurdle with the mirror solution is the time required to oversee the mirror system when the number of Blocks being pulled gets large. For each new Block, the design center’s administrator must create a new mirror at their site to point to the Block’s module URL. When a Block is no longer needed, the administrator must remove its associated definitions and module data from the mirror. Finally, if the number of Blocks used by a design center is large, it can be challenging for the administrator to monitor, understand and act upon the status report routinely.

Each defined mirror has a corresponding entry on the central DesignSync server hosting the IP Blocks. The server must be capable of handling the processing burden of large numbers of mirrors created at multiple design centers, as the revision status of each block must be continuously monitored.

The solution outlined below overcomes these hurdles and burdens.


Solution details

Let's use an example set of Blocks hosted on SyncServer "iphost:80". Some subset of Blocks is to be distributed and updated to two design centers. Figure 4 illustrates a "showmods" listing of all the modules on that server and a listing of two container modules (and their respective hierarchies) as shown via the DesignSync GUI modules hierarchy browser (invoked via Modules->Show->Module Hierarchy).

Figure 4

All of the Blocks/modules use a tag named "RELEASED" to mark the module version available to the design center. In this simplified setup, only one RELEASED module version exists per module. A more extensive setup of multiple released module versions is discussed in the section titled "Tracking multiple versions of each Block".

Several of the Blocks are defined by a hierarchy of submodules. Figure 5 illustrates a detailed listing of the module hierarchy making up the "core;RELEASED" module version. Core is built using ram, rom, cache and cpu, while cpu uses alu and fpu, and finally alu uses bslice. The href from any parent module to one of its submodules uses the RELEASED tag as the selector.

Figure 5

The "showhrefs" command starts off walking down the module hierarchy with the -hrefmode command switch set to "normal" (this is the default for the command and thus isn't specified on the command line); the "populate" command walks the hierarchy the same way. When the command dereferences the RELEASED tag, it finds a module version of the "core" module with that tag. It gets the list of hrefs in core;RELEASED, which in this case is ram;RELEASED, cpu;RELEASED, rom;RELEASED and cache;RELEASED. The command is still in "normal" hrefmode as it starts descending into these submodules, so each RELEASED tag is resolved on the submodule. When the command reaches a submodule version via a tagged module version, it switches to an hrefmode of "static". Every submodule from this point forward is found by the static module version stored in the href. Thus, cpu;RELEASED is a submodule reached via a version tag, so "populate" switches to the static hrefmode when it follows hrefs in cpu;RELEASED. The static version of alu used in cpu;RELEASED is 1.3 and the static version of fpu is 1.2. The static version of bslice used in alu;1.3 is bslice;1.2.

Note that if an href from a parent module is to a submodule's branch tag, then when the "populate" command reaches that submodule, it does NOT switch hrefmode from "normal" to "static": it stays in "normal" mode. That's because the module version the branch tag resolves to is dynamic; it can change from moment to moment as new module versions are checked into the branch. If the href from a parent module is to a unique version ID of the submodule (such as an href to "alu;1.18.1.4"), then "populate" will switch to hrefmode "static" when processing any module hierarchy within that submodule.

IMPORTANT: This switching of hrefmodes is an important concept to learn and remember when dealing with module hierarchy.
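These selector resolutions can be inspected from the command line. A minimal sketch using the example server (the "-rec" and "-hrefmode" switches are the ones named in this paper; the output format is not reproduced here):

    stclc> showhrefs -rec sync://iphost:80/Modules/core;RELEASED
    stclc> showhrefs -rec -hrefmode static sync://iphost:80/Modules/core;RELEASED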


Create top-level container module(s) to identify the Blocks used by each design center

A "container" module likely has no specific data files associated with it. It just defines hrefs to each Block being used and needing auto-updating at a design center.

In the example data, there are two container modules: "container1" defines the set of released modules to keep updated at Design Center1; "container2" defines the set of released modules to keep updated at Design Center2.

There are nine hrefs defined for "container1": to the RELEASED module version of alu, bslice, io, stdcells, fpu, ram, cpu, rom, and cache.

There are only three hrefs defined for "container2": to the RELEASED module version of core, io, and stdcells.

Hrefs are created via the "addhref" command. DesignSync makes it very simple to add multiple hrefs to a module while creating only a single new module version. (In prior releases, a new module version would be created for each new href added.) The easiest way to create multiple hrefs is to define a file with each line containing the relevant information for each href to add. Here is the file used to create the hrefs in "container1", plus the invocation of the "addhref" command:

Figure 6

The "addhref" command provides a "-rootpath" switch that defines the path to prefix to a submodule's name when auto-defining the relative path from the parent module's base directory to the submodule's base directory. In Figure X, the rootpath of ".." is defined. Thus, relative paths for submodule base directories are ../stdcells, ../io and ../cache. This results in the base directory for "container1" and the base directories for all of the submodules being peers of each other within the same parent folder. Using ".." relative paths is a standard-use model for module hierarchies; the "-rootpath" switch just makes these relative paths easier to specify.

Only the hrefs in the container modules use "../<moduleName>" relative paths. Any of the IP Blocks that have module hierarchy within them (like "core" and "cpu") use the default "./<moduleName>" relative path (relative to the parent module's directory). This is because the submodules used within these IP Blocks are IP Blocks themselves (e.g. "alu" is used within "cpu", but "alu" is an IP Block to be released, distributed and reused on its own). If ".." relative paths were used everywhere, it is likely that "populate" would try to fetch the same submodule with different selectors to the same base directory location, which is not permitted. If ".." relative paths were desired everywhere so that all fetched modules would have peer base directories, then unique base directory names would be needed in the hrefs' relative path attribute for each instance of a module fetched into the workspace.

Here is a listing after recursively fetching "container1" and "container2":

Figure 7
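Since the figures themselves are not reproduced in this transcript, here is a hedged sketch of what the hrefs input file and the "addhref" invocation might look like. The file layout and the switch used to read it are assumptions for illustration only; "-rootpath" is the switch described above.

    # container1_hrefs.txt  (hypothetical layout: one submodule URL and selector per line)
    sync://iphost:80/Modules/alu       RELEASED
    sync://iphost:80/Modules/bslice    RELEASED
    sync://iphost:80/Modules/io        RELEASED
    sync://iphost:80/Modules/stdcells  RELEASED
    sync://iphost:80/Modules/fpu       RELEASED
    sync://iphost:80/Modules/ram       RELEASED
    sync://iphost:80/Modules/cpu       RELEASED
    sync://iphost:80/Modules/rom       RELEASED
    sync://iphost:80/Modules/cache     RELEASED

    stclc> addhref -rootpath .. -hreffile container1_hrefs.txt sync://iphost:80/Modules/container1   ;# "-hreffile" is a hypothetical switch name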


Define a mirror to recursively fetch the container module(s) to each design center

A container module can be created that defines a set of Blocks to fetch and keep updated at each design center. The "mirror create" command sets up a mirror to pull the container and fetch it recursively. This can also be done via the "Add Mirror" panel associated with the SyncServer. See Figure 8.

Figure 8

The "Add Mirror" panel is set to recursively update the container module using the selector of "Trunk:Latest". Thus, as new versions of the container module are checked in (likely with changes to hrefs referring to the Blocks to be updated at a design center), the new versions of the container module are automatically fetched into the mirror directory.

The new mirror is not yet marked to be enabled. The following steps should be completed before enabling the mirror.

The "Mirror Directory" field doesn't contain the base directory for the mirror itself. Instead, it is set to the base directory that will hold the container module being fetched, because as "populate -recursive" is run by the Mirror Update Process (MUP), the submodules fetched with their "../<moduleName>" relative paths will create base directories for those modules alongside the container's base directory. It's desirable to have all of those modules within a single mirror workspace, especially if the mirror will be used as a module cache, or mcache.

Use -share mode to auto-fetch module data files into the file cache

Before enabling the mirror, it is best to customize the Mirror Administration Server (MAS) to add the "-share" switch to "populate" when updating the modules in the mirror. The "-share" switch causes all module members' files to be fetched into the DesignSync file cache (the cache's location is defined at the time the DesignSync client applications are installed). The module members in the mirror then have symbolic links created to point to the files in the cache.

NOTE: There are several reasons to place module member files into the cache to optimize system performance:

● By default, the “populate” command uses its “-fromlocal” switch to first look into the file cache when searching for any version of a specific file. If the required version of the file is found, it’s fetched from the cache instead of the server. This minimizes network and server load, while reducing response time for users.

● A user may fetch an entire module using “populate –share” (on Unix only). This creates the module’s directory structure in the user’s workspace and creates symbolic links to each of the module members in the file cache. This process reduces the number of duplicate files in existence.

● When the release distribution system is extended to allow for multiple versions of a Block to be mirrored at a design center, it is likely most of the member files in an instance of a module version will remain static between versions. By fetching all of the module versions in "-share" mode, the files from each module version are placed into the file cache. Once a required file version is in the cache, any other module version using the file version gets only a symbolic link (or optionally, a hard link) to that file version. File system space is preserved because multiple copies of the same file aren't kept in the different module versions. See Figure 3 for a pictorial representation of how symlinks help keep a single local copy of the managed files.

To add the "-share" switch to the "populate" command invoked by the MAS, update the following registry setting in the MAS's PortRegistry.reg file:

HKEY_LOCAL_MACHINE\Software\Synchronicity\General\Mirrors\MUP\PopulateOptions

The default value of this registry key is "-force -get -retain", but additional command switches such as "-share" can be added.

More efficient use of file system resources is possible by having "populate" create "hard" links instead of "soft" symbolic links. This approach is discussed later as an extension to the basic approach.
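With "-share" added, the resulting entry might read as follows (a sketch; how the string value is quoted in the .reg file is an assumption):

    HKEY_LOCAL_MACHINE\Software\Synchronicity\General\Mirrors\MUP\PopulateOptions="-force -get -retain -share"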



Registry settings for performance optimization

Normally when mirrors perform a recursive "populate", each submodule that's fetched is automatically defined as its own mirror (called a "sub-mirror"). In the example, only 10 total modules would be fetched and 10 sub-mirrors defined. If 500 Blocks were being fetched, and five design centers each had their own container and mirror set up to fetch 500 Blocks, it could result in an explosion of 2,500 sub-mirrors being defined. Some customers envision needing more than 35,000 unique mirrors.

To avoid this problem, a registry setting allows the client-side Mirror Update Processes (MUPs) to disable the auto-creation of sub-mirrors when fetching a module recursively. The registry setting has to be defined in the Mirror Administration Server (MAS) PortRegistry.reg file:

HKEY_LOCAL_MACHINE\Software\Synchronicity\General\Mirrors\Options\ModuleRegisterSubmirrors=dword:0

Because no sub-mirrors are created when the registry key is set to "dword:0", only five total mirrors would be defined for the example of 500 Blocks at five design centers. This drastically reduces the amount of overhead the SyncServer hosting the IP Blocks must perform for each revision-control operation against the set of defined mirrors.

Another registry key that should be set for your module mirrors will cause the “populate” command run by the mirror system on that MAS to always run with the “-incremental” switch on to optimize performance:

HKEY_LOCAL_MACHINE\Software\Synchronicity\General\Mirrors\Options\ModuleIncrPopOnly=dword:1

Once all of the PortRegistry.reg customizations are made, the mirror can be enabled. This can be done via the "mirror enable" command or the View/Edit Mirrors panel.




Extending the mirror system to update a container when IP Blocks change

A desired goal of the system is to have the mirror fetch the container module every time one of the referenced submodules of the container is released (i.e. the submodule's "RELEASED" tag moves to a new version of the submodule). Simply defining a mirror and disabling auto-mirror creation is insufficient: if a Block changes and its RELEASED tag moves to a new module version of that Block, that action does not modify the container module. Since automatic sub-mirror creation was turned off, no setting yet exists on the hosting SyncServer indicating to the design center sites that their associated container needs to be updated. The solution to this situation makes use of two other DesignSync capabilities: "whereused" back references and a server-side trigger.

"Whereused" back references

Each time the "addhref" command is run to create a hierarchical reference from a parent module to a submodule, it stores a "back reference", or backref, on the submodule. This backref states which parent modules contain hrefs to this submodule. The "whereused" command analyzes the backref pointers and reports the chain of parent modules. Figure 9 illustrates the recursive "whereused" output for the "bslice;RELEASED" Block. (The DesignSync GUI also has a graphical version of this output via the Modules->Show->Where Used menu item.)

Figure 9

Each parent level is indented as the backref chain is traversed.

The line labeled "<== 1" shows that the container2;1.2 Block was reached. Reading back through the levels of indentation, container2;1.2 referenced core;1.6, which referenced cpu;1.4, which referenced alu;1.3, which referenced bslice;1.2 (= bslice;RELEASED).

The line labeled "<== 2" shows that container1;1.2 was reached. Reading back through the levels of indentation, container1;1.2 referenced cpu;1.4, which referenced alu;1.3, which referenced bslice;1.2 (= bslice;RELEASED).

The lines labeled "<== 3" and "<== 4" show other ways that container1;1.2 was reached.
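At the script level, this is exactly the information the server-side trigger described next (and listed in full in the appendix) walks: each module stores its parents in a SyncBackRefs property. A minimal sketch of that lookup, using only calls that appear in the appendix code:

    # list the parent modules recorded as back references on a module
    set mod sync://iphost:80/Modules/bslice
    if { ![catch {url getprop $mod SyncBackRefs} backrefs] } {
        foreach parent $backrefs {
            puts "referenced by: $parent"
        }
    }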


Server-side trigger

The last major component of the basic solution is a server-side trigger tied to "tag" command events. This trigger is installed on the SyncServer hosting the IP Blocks. The trigger looks for "tag" events where the tag name matches a pre-defined naming convention (in the example, the tag must match the value REL*). When a matching tag is found and the tagged object is determined to be a module, the trigger obtains the module's "whereused" hierarchy. If any one of the module's parents matches the pre-defined name of the container modules (in the example, the container's name must match the value container*), then the trigger adds a "tag transaction" to the mirror system. The added transaction is associated with the specific container module. When the mirror system sees the "tag" transaction, it matches the container module with the container's mirror(s) and notifies every MAS with an associated mirror to update the container.

This trigger is the key component in relieving the SyncServer of the unnecessary burden of processing potentially thousands of mirror definitions for each revision-control action. The trigger only watches for and processes "tag" command events. If the tag isn't associated with a module, or the tag doesn't match the released-Block naming convention, then the trigger simply exits.

To set up the server-side trigger, first enable "tag" revision control note creation on the SyncServer. This is done via the Administer Server->Server Settings panel. Select the RC Notes tab. See Figure 10.

Figure 10

Associated with this paper is a sample TCL file with the code to install as the server-side trigger (see the Appendix). Customize the code to set up the pre-defined tag names and container module names. Place the trigger code file into the SYNC_CUSTOM_DIR/servers/<host>/<port>/share/tcl directory. Install the trigger via the Trigger->Add->Note Activity panel. See Figure 11.

Figure 11


Handling changes to a Block that has hierarchy

The basic solution previously outlined pushes a <Block>;RELEASED module to each design center for which a container module includes an href to the RELEASED module. If the Block has module hierarchy beneath it that is changing, updates to those Blocks must get automatically pushed to the design centers through the following process.

The container2 module shown in Figure 12 illustrates a module hierarchy.

Figure 12

The container module has only hrefs to core;RELEASED, io;RELEASED, and stdcells;RELEASED. When one of those three Blocks has its RELEASED tag moved to a new version of the module, that re-tagging causes the trigger to fire, which eventually causes the mirror to recursively fetch the container2 module.

Consider an instance in which a new version of "ram" is tagged as "RELEASED". A hierarchical traversal starting from "core" would dereference the href to ram;RELEASED and see the new version of "ram" that's been tagged. The mirror, however, runs "populate -recursive" starting from container2;Trunk:Latest. When the traversal hits core;RELEASED, it switches to static hrefmode. Once switched, it will always use the static version of "ram" kept in the href to "ram" (which, at this point, is still ram;1.2). Just tagging a new version of "ram" as "RELEASED" will NOT directly result in the "populate -recursive" of container2 picking up ram;RELEASED.

Picking up the new version of "ram" requires that a new version of "core" gets checked in and tagged as RELEASED. When "core" is checked in, the "ci" command updates the static version of "ram" in the href pointing from "core" to "ram" (assuming that "core" and "ram" are in the same workspace when "core" is checked in).

To see that a new version of "core" needs to be checked in after "ram" was RELEASED, run the "showstatus -recursive" command on the workspace holding the Blocks under development. See Figure 13.

Figure 13

The "showstatus" output clearly shows that the "core" href to ram;RELEASED expected the RELEASED tag to resolve to version 1.2, but it instead resolved to 1.3. This signifies that "core" needs a check-in to synchronize the static version kept in its href to "ram" with the version of "ram" in the workspace.

Simply checking in a new version of "core" is not enough to cause the necessary pull to the design centers. The new version of "core" must also have the RELEASED tag moved to it. See Figure 14.

Figure 14

To take this example one more step: if a new version of the "alu" Block was RELEASED, then "cpu" would need to be checked in and tagged as RELEASED, as would "core".
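The re-release sequence just described can be sketched as command-line steps. This is a sketch only: "showstatus -recursive", "ci" and "tag -replace" are the commands named in this section, but the exact arguments and the new core version number (1.7) are assumptions.

    stclc> showstatus -recursive .                                  ;# core's href to ram;RELEASED reports: expected 1.2, resolved 1.3
    stclc> ci core                                                  ;# check in core so the static version in its href to ram advances
    stclc> tag -replace RELEASED sync://iphost:80/Modules/core;1.7  ;# move RELEASED to the newly created core version (number hypothetical)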


In the examples, if "ram", "alu" or "cpu" have their RELEASED tag moved to a newer module version of those Blocks, that in turn causes the trigger to add a dummy tag transaction to "container1". The design center mirror associated with "container1" will be updated with each new RELEASED version of the Block.

Extensions

Tracking multiple versions of each Block

In the simple model already described, only one version of a released Block is referenced by the container and thus pulled to the design center, but many different versions of a Block may be needed. Here are several ways in which the current mechanism could be extended to handle tracking of multiple versions of each Block:

● Adopt a convention by which the Blocks use a tag name to distinguish quality levels of each Block. For example, tag names of GOLD, SILVER and BRONZE are always used. This extends the one tag of RELEASED already discussed to three tags. When a new released version of a Block becomes available, the "tag -replace" command is run to move the BRONZE tag to the new released version. The BRONZE version is the most recently released version and is the least stable. The module version previously marked as BRONZE now gets the SILVER tag. The module version previously marked as SILVER now gets the GOLD tag. The previous module version marked as GOLD loses the GOLD tag; perhaps it picks up some other tag name to denote that it was a previously released version. (A sketch of this rotation appears after this list.)

○ To extend the system to handle these multiple valid tags for each Block, the server-side trigger must be modified to extend the known tag names it looks for to include tag names matching GOLD*, SILVER* and BRONZE*.

○ In addition, three hrefs to <Block>;GOLD, <Block>;SILVER and <Block>;BRONZE must be added to the container module. The relative paths in each of the three hrefs need to resolve to a unique directory name in the mirror workspace where these three Block versions are populated. The "populate" command fails if it tries to fetch the same module with a different selector to the same workspace base directory. The relative path for each version of the same Block could be defined as "../<BlockName>/<Selector>" or some variation.

● Instead of using a dynamic tag name structure like GOLD, SILVER and BRONZE, each released version of a module could be assigned a unique tag that does not move when a new released version of a Block is created. Examples of such tag names are REL1.0, REL2.0 and REL3.1.

○ This scenario can be cumbersome in that new hrefs must be routinely added to each container every time a new version of a Block is released. A better solution is to automate the adding of the href to the container, which can be achieved by refactoring the server-side trigger script. The script is written to still process each tag event on the server, but if the tag matches the desired pattern of REL*, each container module on the SyncServer has its hrefs searched (via "showhrefs -format list"). The script looks for an href to the module that was tagged (regardless of what selector the href used; it matches only on the base module name). If one is found, the script adds an href from the container to the newly released module version, using the new tag name as the selector in the href definition. This will not solve the problem when a brand new Block is created and released, since an href from the container to the new Block does not yet exist. The trigger could create an href in the container to the newly released Block, but that may not be desirable since each container keeps only hrefs to the Blocks of interest to a design center. New Blocks may not be of interest to every design center, so new Blocks would likely require manually added hrefs from each container module.

○ As with the GOLD, SILVER, BRONZE extension, the relative paths defined in newly added hrefs must be unique to avoid overlap of the same Block with different selectors into the same base directory of the mirror workspace.

● Various server-side triggers with different logic to process the different containers could be defined for a given design center. Server-side triggers are controlled by the administrator for the SyncServer hosting the IP Blocks, and there may be multiple SyncServers involved that host Blocks from many different design teams. Sophisticated processing could be envisioned to handle multiple release models and distribution mechanisms for the Blocks.
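The tag rotation for the GOLD/SILVER/BRONZE convention described in the first extension above might look roughly like this (a sketch; only "tag -replace" is taken from this paper, and the version numbers are hypothetical):

    # ram;1.5 has just been released; rotate the quality tags (versions hypothetical)
    stclc> tag -replace GOLD   sync://iphost:80/Modules/ram;1.3    ;# previous SILVER version becomes GOLD
    stclc> tag -replace SILVER sync://iphost:80/Modules/ram;1.4    ;# previous BRONZE version becomes SILVER
    stclc> tag -replace BRONZE sync://iphost:80/Modules/ram;1.5    ;# newest released version becomes BRONZE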

Using the mirrored Blocks

As previously noted, the modules now being automatically updated into the mirror cannot be used via "populate -mirror", because the "-mirror" switch is incompatible with modules. However, since the mirror was updated using "populate -share", all module member files are in the file cache. Individual users wanting to make use of the modules in the cache can proceed in a number of ways:

● Fetch a module (and, optionally, its entire hierarchy) as read-only by using "populate -share <URL to module>". The member files of each module fetched into the workspace will be symbolic links to the corresponding members in the file cache. Individual member files can then be fetched as a full copy (populate -get), and the members can be locked (populate -lock) to get an editable version of the file. (A sketch of this workflow appears after this list.)

● Read-only reference copies of the IP Blocks are sitting in the mirror at the design center. They are not copied or used directly in user workspaces. Instead, the tools and design flow used at the design center expect to find the Blocks in the mirror directory with no further processing needed of them other than to get automatic updates via the mirror updating process.

● Better performance and more efficient use of system resources can be achieved if “hard” links and/or module caches (mcaches) are used. These are described as extensions.
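As an illustration of the first option in the list above, a user's session might look like the following sketch (the member file path is hypothetical):

    stclc> populate -share sync://iphost:80/Modules/cpu;RELEASED   ;# member files arrive as symlinks to the file cache copies
    stclc> populate -get cpu/fpu/fpu_core.v                        ;# replace one member's symlink with a full local copy
    stclc> populate -lock cpu/fpu/fpu_core.v                       ;# or lock the member to get an editable version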



Efficient use of data at design centers

The primary solution calls for fetching the Blocks into the mirror with "populate -share" so that all data files are stored in the file cache and can be linked from users' workspaces. Further optimizations can be made to conserve file system resources and significantly increase performance when using these Blocks.

Using hard links instead of symlinks

"Hard" links to the file cache can be created in a Unix workspace to replace "soft" symbolic links (symlinks). Symbolic links have the disadvantage of each using an inode (a file system resource), while "hard" links don't use up another inode for each hard link pointing to the same file.

For example, if a file in the DesignSync file cache has three symlinks to it (one from a module in the mirror and two from user workspaces) then the file system uses four total inodes for that file (one for the actual file in the cache and one for each symlink). If DesignSync is instead configured to use hard links, only one inode would be used in our example. The file in the cache, the file in the mirror and the file in each of the two user workspaces all have direct references to the same file content managed on disk and known by the operating system. All of these direct references still only use one total inode for the file.



To make use of hard links, use SyncAdmin and go to the Site Options->Links panel. Then click on the "Create hard links for workspace to cached files" checkbox. See Figure 15.

Figure 15

DesignSync's intelligence allows it to check that the workspace being utilized via "populate -share" is on the same file partition and mount point as the file cache. If it is, and the "Create hard links…" setting is turned on, then a hard link is created automatically. DesignSync also takes care of switching out that hard link for a full writable copy of the file if, for example, you lock the file for editing. If the workspace and the file cache are NOT on the same partition and mount point, then DesignSync falls back to just creating a symlink, even if SyncAdmin says to create hard links. Note that hard link support is not turned on by default. There are more checks to be done within DesignSync to set up and manage hard links, so the choice is left to the administrator whether hard links should be the default behavior.

Using SyncAdmin defines the hard link registry keys in the SiteRegistry.reg, so creating hard links applies to all clients spawned by that DesignSync installation. The registry keys controlling hard link creation can also be manually set in the PortRegistry.reg or MirrorRegistry.reg. The keys to set are:

● HKEY_LOCAL_MACHINE\Software\Synchronicity\Client\Cache\PBFCEnabled=dword:1

● HKEY_LOCAL_MACHINE\Software\Synchronicity\Client\Cache\PBFCAllowUserToOwnFile=dword:1

The key requirement enabling the use of hard links is that all places using hard links must reside on the same file partition and under the same mount point. So, the highest efficiency in our example is gained when the file cache, the mirror workspace(s) and the user workspace(s) all exist on the same file partition and all use the same mount point (probably sitting on some large file server).


Setting up and using a module cache (mcache)

Another way to improve the efficiency of automatically updating the Blocks at each design center is to set up the mirror as a module cache (mcache).

The primary advantage of using an mcache is that if "populate" sees that the specific version of the module for which it is searching is already in the mcache, then a single symbolic link is created in the workspace pointing to the module's base directory in the mcache. The module in the mcache may have a huge number of member files, but they can all be accessed as though located in the workspace, because the workspace symlink acts as though the mcache module base directory is sitting in the workspace. Creating just one symlink in the user's workspace to a module base directory is much faster than creating symlinks (or hard links) or fetching a full copy of each member file of the module.

It is a simple task to set up the mirror as an mcache. If the mirror's top-level directory has "setroot" run against it, then it is already declared to be a workspace root directory, which is enough for it to hold metadata about all the modules fetched into the mirror. So, in general, any workspace can act as an mcache.

Use "populate -mcachepath" to have "populate" look for modules in an mcache, or to search in a number of different mcaches. This option takes one or more pathnames to workspaces acting as mcaches. "Populate" looks for the specific module version it needs in the mcache(s) based on the module's selector first being resolved on the server.

Using mcaches also requires some upfront considerations:

● If "populate" is recursively fetching a module hierarchy, then for a module in the hierarchy to match the "populate" search, the mcache module must have been fetched recursively.

● Any modules in the mcache that have submodule hierarchy should have that hierarchy fetched using "./<path>"-based relative paths and not "../" relative paths. When creating mcache symlinks in the workspace, the "populate" command doesn't care if the submodules are fetched using "../"-based relative paths, because the command never reaches these submodules: as soon as it creates a symlink to a module in the mcache, "populate" finishes walking down that leg of the module hierarchy. Creating a peer-directory structure of module base directories involved in a module hierarchy (by using "../" relative paths to the submodules in the hierarchy) will likely lead to problems using the mcache. The user's workspace gets only one symbolic link to the top module of the hierarchy in the mcache. The submodules will likely not be reachable from the user's workspace because they don't reside beneath the base directory to which the symbolic link in the workspace points.

● Most likely, the module data in the mcache will be available as read-only, whether because the mcache/mirror was set up in SUID mode or fetched by a different user. Therefore, not only are the module members' files read-only, it is likely that no additional data files can be created alongside the module member files in the mcache. So, the mcache may truly have to be a read-only reference snapshot of the Blocks. Of course, if the mcache/mirror was populated in "-share" mode, then the individual module member files are in the file cache, and the user's workspace can be populated with the "-share" option to get symbolic links (or hard links) to the member files. This setup allows the user's workspace to have additional data files created alongside the links to the module member files. Finally, the user's workspace can be populated with the "-get" or "-lock" option to get full copies of the module member files. In this case, checking in the member files (if access controls permit it) would create a new module version on the repository SyncServer hosting the Block. The question of retagging the Block's new module version then comes into play, that is, whether to release the changes just checked in or not.
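Putting the mcache pieces together, the setup and use might be sketched as follows (paths are hypothetical, and whether "setroot" takes the directory as an argument or is run from inside it is left to the command reference):

    # On the design center file server: declare the mirror's top-level directory a workspace root (mcache)
    stclc> setroot /proj/mirrors/dc1

    # From a user workspace: resolve the selector on the server, then link to the module in the mcache if that version is present
    stclc> populate -share -mcachepath /proj/mirrors/dc1 sync://iphost:80/Modules/core;RELEASED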


To get the ultimate efficiency, set up the mirror to use "populate -share -mcachepath <mcacheDirectory>" and also set up the MAS to use hard links. The "-mcachepath" option allows symbolic links to mcached modules to be created in the mcache itself. All the unique versions of member files are in the file cache, and the individual member files of every module are hard links to the member files in the file cache. This results in optimal usage of disk space and inodes, in that only one inode is used for every unique member file version in the file cache. Minimal directory nodes exist in the mcache because of the use of symlinks to mcache module base directories (see the considerations above). These settings can be very useful if a number of different module hierarchies are fetched into the mcache and many of the submodules used in the hierarchies are the same versions of the individual submodules.


How optimization occurs between the file cache and mcache

At a design center, one person may use versions of Blocks A, B and C, while another uses versions of Blocks B, D and F. If both are using the same version of Block B, then an advantage is gained because the first user has already called Block B and caused it to be copied into the file cache, making it locally available to the second user.

The advantage occurs between the operations of the file cache and the mcache. The file cache contains copies of individual files. The mcache will have a container object (Module) representing a specific version of an object. A Module is an abstracted object, such as a Block. For example, there may be a Module representing the CPU object made of hundreds or thousands of files. The mcache can simply link to the specific versions of the files in the file cache that make up that version of the object, both reducing disk space usage and minimizing the time required to transfer the file between the server and the design center.


Using auto-generated mirrors

With the DesignSync "Auto-Generated Mirrors" extension (also known as "Scripted Mirrors"), the mirror system can automatically generate mirrors when certain criteria are met. The criteria are defined in a script and are completely customizable. A script that handles the majority of uses is provided with the release. Once the script starts an auto-generated mirror, the mirror fetches its required data. The mirror then stays active for a relatively short period (a day). If no other activity happens on the vault URL and selector associated with the generated mirror, the mirror gets disabled automatically. It then auto-generates again when the criteria are met. This allows the mirror system to handle a large number of mirrors because only a small number are active at any given time.

Here’s how auto-generated mirrors can be used to more simply solve the IP Block Distribution System outlined above:

● The Add Mirror panel has been extended to allow a vault selector to specify a wildcard expression when creating auto-generated mirrors (see item #1 in Figure 16). Thus, a selector of REL* might be specified for the URL of “sync://iphost:80/Modules”. This setup causes every module on iphost:80 containing a tag beginning with “REL” to potentially be mirrored. The TCL script supplied when creating the auto-generated mirror (item #2) is run by the MAS and decides whether to fetch each module or not.

○ If a tagging convention such as GOLD, SILVER and BRONZE is used, a different approach must be taken since no wildcard matches all three selectors. The selector might instead be "*" (all selectors matched) and the script then looks only for operations in which the GOLD, SILVER or BRONZE tag is applied, allowing those modules to be fetched. Another approach could be to create three different scripted mirrors: one using the GOLD selector, one using SILVER and one using BRONZE.

● The container module is no longer necessary to define the set of IP Block modules to pull to the design center. Instead, each design center creates its own file that lists the IP Blocks to be fetched from the Repository Server. The TCL script reads this manifest file and determines if the module and its selector should be mirrored. The manifest file can be of any format desired. A separate "cron" process could run that programmatically determines which IP Blocks from the IP Repository are to be pulled to the design center and refreshes the manifest file. (A sketch of such a decision script appears after Figure 16.)

○ Another approach may still employ the use of a top-level module defining a module hierarchy in which the IP Blocks participate. The manifest file is then defined by running "showhrefs -rec" on this top-level module. It is not appropriate to have the TCL script run "showhrefs -rec" every time it executes to determine whether modules should be fetched.

● The server-side trigger is not needed. Previously, this trigger looked for tag events and searched a module's "whereused" hierarchy to determine if the container should be populated. Now the generated mirror's TCL script reads the manifest file to determine which modules should be fetched and which can be skipped.

● No registry settings have to be altered to stop the container from creating sub-mirrors, because there is no longer a container module.

● No manual registry settings have to be defined to tell the mirror system to run "populate -share" when the mirror runs. The Add Mirror panel now has the ability to define whether the mirror should use "-share" or not (item #3 in Figure 16).

● The Add Mirror panel also allows easy setup of the mirror to use "hard" links instead of "soft" symlinks when it runs "populate -share" (item #4 in Figure 16).

Figure 16
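A hedged sketch of the decision logic such a generated-mirror script might implement, assuming the MAS hands the script a module URL and a selector and that the manifest is a plain text file. The procedure name, its arguments and the manifest format are all assumptions; only "url leaf" is taken from the appendix code.

    # Hypothetical decision procedure: mirror a module only if the design center's manifest lists it.
    proc shouldMirror {modUrl selector} {
        set fh [open /proj/mirrors/dc1/manifest.txt r]
        set lines [split [read $fh] "\n"]
        close $fh

        # manifest format (hypothetical): one "<module leaf name> <selector pattern>" pair per line
        set leaf [url leaf $modUrl]
        foreach line $lines {
            if { [string trim $line] eq "" } { continue }
            set name    [lindex $line 0]
            set pattern [lindex $line 1]
            if { [string match $name $leaf] && [string match $pattern $selector] } {
                return 1
            }
        }
        return 0
    }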


Accessing a work-in-progress Block

If access to updates to an in-development Block is required, the simplest solution is to create a separate mirror for that work-in-progress Block and use the "<branch>:Latest" selector for the branch tracking the module's changes. Every time a revision control operation happens on that Block, the mirror will pull the updates. This simple solution, however, could result in many work-in-progress Blocks to keep updated at each design center, with a large volume of mirror definitions to track, an undesirable situation for reasons previously described.

A better approach is to create a combination of another container module with another server-side trigger. The container module would contain hrefs only to the work-in-progress Blocks (e.g. to each Block's "<branch>:Latest" selector). The new server-side trigger would instead fire when a "checkin" event happens on a module (instead of the "tag" event). The trigger takes the incoming module and still follows the whereused backrefs to find associated container modules. The container module's hrefs can then be examined (via "showhrefs -format list") looking for an href with the "<branch>:Latest" selector to the module just checked in. If a matching href is found, then a "tag" transaction is added to the server's mirror transaction log, which causes the mirror to populate the container and fetch the checkins just made to the work-in-progress Block.
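The container-href check that this checkin-event trigger needs might be sketched as below. "showhrefs -format list" is the command named above, but the shape of each returned element is not shown in this paper, so the loose textual match here is purely illustrative.

    # Hypothetical helper: does this container track <modUrl> on <branch>:Latest?
    proc tracksWorkInProgress {container modUrl branch} {
        foreach h [showhrefs -format list $container] {
            # assume the submodule URL and its selector both appear in the element's text
            if { [string match "*${modUrl}*" $h] && [string match "*${branch}:Latest*" $h] } {
                return 1
            }
        }
        return 0
    }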

Conclusion

DesignSync offers several mechanisms that can be combined to create the powerful data-management solutions needed by global design teams. The higher level of abstraction defined by modules and hrefs perfectly suits the Block-based design paradigm of many large integrated circuit (IC) and system-on-chip (SOC) designs. The mirror system is customizable so that large amounts of data can be replicated quickly and easily across geographically dispersed design centers. Having the design data updated locally, before users need it, aids quick, optimal access to this released design data. Finally, taking advantage of DesignSync's file cache, symlinks, hard links and module caches can optimize the amount of system resources used for all of that data. In complex IC design processes, the ability to deploy a high-performance, collaborative design environment that scales across multiple projects can be the difference between success and failure. DesignSync is the most advanced, proven solution to address the needs of multi-site, IP reuse design methodologies.


Appendix – TCL code

#
# set up the trigger as
#
# NAME: <desired name>
# Tcl File: <name of the file>
#   place the file in custom/servers/<host>/<port>/share/tcl
#
# Active : yes
# Command: 'create'
# Atomic: yes
# Note Type: RevisionControl
# ObjectType: ** ALL **
#

namespace eval ::NotifyMAS {

#---------------------------------------------------------
# checkSanity: check that the required params are present
#---------------------------------------------------------
proc checkSanity {} {
    if { ! [info exists ::SYNC_Parm(tag)] } { return 0 }

    if { ! [releaseTag $::SYNC_Parm(tag)] } { return 0 }

    if { ! [info exists ::SYNC_Parm(objects)] } { return 0 }

    return 1
}

#------------------------------------------------------------
# getModUrl: Get the first object from the 'objects' list.
#            On a tag op, this is a version url; get the
#            module url by dropping the ;<rev>
#------------------------------------------------------------
proc getModUrl {} {
    set mv [lindex $::SYNC_Parm(objects) 0]

    # remove the host/port and version part
    set mp [url path $mv]
    set idx [string last ";" $mp]
    if { $idx > 0 } {
        incr idx -1
        set mp [string range $mp 0 $idx]
    }

    set murl sync://$mp
    return $murl
}

#------------------------------------------------------------
# isContainer: Check if given module url is a container
#------------------------------------------------------------
proc isContainer {mod} {
    ######################################################
    ######################################################
    # CUSTOMIZE: this is where to customize the set of
    # container modules that define the set of submodules
    # to distribute to each remote site.
    ######################################################
    ######################################################
    set clist [list container*]
    set leafname [url leaf $mod]
    foreach m $clist {
        if { [string match $m $leafname] } {
            return 1
        }
    }
    return 0
}

#-----------------------------------------------------------------
# notifyCheckRecursive: Check the whereused by looking up the
#                       back ref, do it recursively. When a match
#                       against a container is found, add to array
#-----------------------------------------------------------------
proc notifyCheckRecursive {mod} {
    # get the back refs, and check for macro top level
    # if found, then break out...else reach end

    set backrefs ""
    if { [catch {url getprop $mod SyncBackRefs} backrefs] } {
        return
    }

    foreach br $backrefs {
        if { [isContainer $br] } {
            # the mirror tag transaction needs a URL with a version
            # number, so just use a plain 1.1 version. it's not
            # really impacting anything.
            set br "$br;1.1"
            set ::foundArray($br) 1
        } else {
            # recurse on the back ref
            notifyCheckRecursive $br
        }
    }
}


#--------------------------------------------------------------
# processFound: Process the array by adding transactions.
#               Add a tag transaction on the container
#               with the appropriate tag name
#--------------------------------------------------------------
proc processFound {tagval} {
    foreach u [array names ::foundArray] {
        if { $::foundArray($u) } {
            # puts "_translog append [list $u "tag" $tagval 0 [list from trigger]]"

            # this is the internal API to add a transaction to the Repository
            # Server's MPD, and that gets pushed to the appropriate mirrors
            # registered against the URL. in this situation, we want to say
            # that the container module has been tagged so that it gets
            # auto-populated into the mirror (the container really wasn't tagged).
            _translog append [list $u "tag" $tagval "" [list from trigger]]
        }
    }
}

#---------------------------------------------------------
# releaseTag: Check if it's a release tag
#---------------------------------------------------------
proc releaseTag { tagval } {
    ######################################################
    ######################################################
    # CUSTOMIZE: This is where to define the set of tags
    # to look for to determine whether a submodule should
    # cause its "whereused" chain of backrefs to be
    # searched for a container module.
    ######################################################
    ######################################################
    set relTagsList [list REL*]
    foreach t $relTagsList {
        if { [string match $t $tagval] } {
            return 1
        }
    }
    return 0
}

#---------------------------------------------------------
# isModule: check if module
#---------------------------------------------------------
proc isModule {item} {
    set idx [string first /Modules/ $item]
    if { $idx < 0 } { return 0 }
    return 1
}


#---------------------------------------------------------
# main: start and complete all the processing
#---------------------------------------------------------
proc main {} {
    if { [checkSanity] } {
        catch {
            # the tag value
            set tagval $::SYNC_Parm(tag)

            # is it a release tag
            if { ! [releaseTag $tagval] } { return }

            # array to hold results
            array unset ::foundArray
            set ::foundArray(dummy) 0

            # get the module url
            set modurl [getModUrl]

            # is it a module
            if { ! [isModule $modurl] } { return }

            # check back refs recursively
            notifyCheckRecursive $modurl

            # process any found
            processFound 1,SyncBranch#$tagval

            # puts [array get ::foundArray]
        } msg
        # puts $::errorInfo
        puts $msg
    }
}
}

::NotifyMAS::main
