Thursday, March 22, 2007

Synchronous and Asynchronous Publishing

Web Publisher uses synchronous publishing by default, but for each site you have the option of choosing asynchronous publishing. If you want to specify asynchronous publishing, you do so in the site’s properties.

Synchronous publishing pushes content to a web server whenever it is promoted to the Staging or Active states. For example, if a file is promoted from WIP to Staging, it is published to the Staging web server.

When synchronous publishing is enabled, and when a file with a blank effective date is promoted to Approved, Web Publisher immediately promotes the file to Active. Upon promotion to Active, a file is immediately published, unless Web Publisher’s system settings are set to delay promotion to active.

Asynchronous publishing turns off the publish-on-promotion function when the associated publishing job is running at least every 15 minutes. Content is not published on promotion to Staging or Active when the job is running at that frequency or greater.

Web Publisher overrides asynchronous publishing and publishes the content on demand if a user previews WIP content as a web page but the content has not been published since it was last modified. Specify asynchronous publishing if jobs are running frequently; this speeds up the performance of Web Publisher.

Automatic asynchronous publishing is only available for the check-in, import, and add-translation actions; it is not available in other views or customized actions. To modify Web Publisher for asynchronous publishing (a sample of the resulting properties file follows these steps):
1. On your application server, navigate to Documentum/config/WcmApplicationConfig.properties.
2. Open the WcmApplicationConfig.properties file in a text editor.
3. Find the parameter auto-publish=false and change it to auto-publish=true.
4. Save and close the WcmApplicationConfig.properties file.
5. Restart your application server to pick up the change.

Asynchronous publishing pushes content to a Web server whenever the following actions are performed:
• import a content file
• check in a content file
• check in a Web Publisher Editor file
• import a file within Web Publisher Editor
• check in an eWebEditPro file (using the Save and Close options)
• add a translation to a Web site using the Add Translation page

SCS will publish content to the Web site upon completion of one of these actions. Publishing to the Web site may take several minutes depending on the number of SCS jobs in the SCS server queue.

How can I remove a workflow that was associated in Web Publisher with a document that has been deleted?

Note: By default, when deleting a document that is associated with a workflow within Web Publisher, you should receive a message saying that the document could not be deleted because it is associated with a workflow.

However, it is possible that the document was deleted outside the Web Publisher environment, possibly using the destroy API.

When you start a workflow within Web Publisher, Web Publisher creates a wcm_change_set object and a folder named Supporting Docs. The Supporting Docs folder and the document to be included in the workflow are linked to this change set.
A dm_note object is also created; it corresponds to the message entered when starting the workflow. This dm_note object is linked to the Supporting Docs folder.

So, assuming that you associated only one document when starting the workflow, we can run a query to find all change sets that have only one object linked to them (by default you would have two objects linked to the change set: the Supporting Docs folder and the document itself). From this we can find the workflow associated with each change set.

Query:
select child_id,parent_id from dm_relation
where relation_name = 'wcm_process_workflow' and
parent_id in (select r_object_id from wcm_change_set where r_link_count = 1)

Note: The child_id corresponds to the workflow ID, and the parent_id corresponds to the wcm_change_set object that was created.

To remove the workflow, you will then need to issue the Abort API method for every child_id returned from the query.

To clean up, you may also need to remove the wcm_change_set objects. As the wcm_change_set type is a subtype of dm_folder, the r_folder_path attribute will show you where each of these folders is located.

NOTE: Before you remove the wcm_change_set objects, you would first need to delete or re-link any objects that are currently linked to them. A query to locate the change sets is sketched below.

How can I find rendered or even non-rendered documents of a particular format?

Note: The queries below will help you find documents that have a rendition attached to them.

The i_rendition attribute of dmr_content can have the following values:

i_rendition = 0 --no rendition
i_rendition = 1 --server created
i_rendition = 2 --client created

This DQL returns documents that have a client-created PDF rendition attached, for example msw8 (Word) documents with a PDF rendition:
DQL> select object_name, r_object_id from dm_document where r_object_id in (select parent_id from dmr_content where any i_rendition=2 and full_format = 'pdf');

object_name r_object_id
========================= ================

fed new doc 0904969c8000090f
(1 rows affected)

If I run the query below, you can see that it still returns the msw8 document. Keep in mind that the PDF rendition is a separate content object, so you still have the original msw8 content and the PDF content linked to the same document. For this reason the query above uses the full_format 'pdf' to check whether any content of that format is linked to a dm_document.

DQL> select object_name,r_object_id from dm_document where a_content_type='msw8'
and r_object_id in (select parent_id from dmr_content where any
i_rendition=0 and full_format = 'msw8');

object_name r_object_id
========================= ================

msw8.doc 0904969c80000145
fed new doc 0904969c8000090f
(2 rows affected)

If you want to do the opposite, you can run the following to find documents that do not have renditions linked to them.

select object_name, r_object_id from dm_document where a_content_type='msw8' and r_object_id not in
(select parent_id from dmr_content where any
i_rendition=2 and full_format = 'pdf');

Remember also that the first query only returns documents whose PDF content is a rendition linked to an msw8 original. If you would like to search for original documents whose primary content is PDF, you need to run the query below:

DQL> select object_name, r_object_id from dm_document where r_object_id in (select parent_id from dmr_content where any
i_rendition=0 and full_format = 'pdf');

object_name r_object_id
========================= ================

pdf template 0904969c8000015a
test 0904969c8000190e
(2 rows affected)

Troubleshoot Content Replication

When an object's content is stored in a filestore and is replicated to another filestore (by surrogate_get, manually, or by scheduled content replication), then prior to the replication a dump of the content object shows the storage_id as the distributed store, and a dump of the dmi_replica_record shows just one component_id for this content object.

After replication, a dump of the dmi_replica_record for this content object (data_ticket of content object is the same as the r_object_id_i of the dmi_replica_record) will show multiple component_ids (ids of the filestores) that it is stored in.
All the filestores, which are part of the distributed store will be listed as component_ids for the distributed store.


Useful information to obtain:

1. Get server.ini files from all locations

2. dump serverconfig objects
DQL: select r_object_id, object_name from dm_server_config
API> dump,c, for each serverconfig object

3. dump distributed store object

DQL: select r_object_id from dm_distributedstore
API> dump,c,

4. dump filestores in distributed store (component_ids from above output)

API> dump,c,

5. dump example object

API> dump,c,09001c808002ca6e

6. dump content of example object

API> dump,c,06001c808000af7a

7. Determine how many component IDs are associated with a replica - dump the replica object for the replicated content object. Get the data_ticket from the content object dump above (the data_ticket is -2147477618 in the example below).

API> retrieve,c,dmi_replica_record where r_object_id_i = '-2147477618'
...
2d001c808000178e

API> dump,c,2d001c808000178e

After doing this you should be able to determine whether the job is running successfully and whether your configuration is correct. If it is not successful, you need to look at the content replication job that is failing.

a) make sure the content replication jobs were installed through toolset.ebs
b) make sure the user and password are on the same line under arguments (for the connect info to the docbases)
c) trace the job (level 10)
d) trace the method

How do I determine the default value for an attribute of a type from the data dictionary?

1. Current "official" way is to run a query:

select r_object_id, attr_name, default_value from dmi_dd_attr_info where type_name = '<type name>' and business_policy_id = '0000000000000000' and nls_key = '<nls key>'

or for a specific attribute:

select r_object_id, attr_name, default_value from dmi_dd_attr_info where type_name = '<type name>' and attr_name = '<attribute name>' and business_policy_id = '0000000000000000' and nls_key = '<nls key>'

default_value returned from the query is an object ID pointing to an expression object, and from there, you can get the value of the expression_text.

For an example, check out the ypli_type081502 type with attribute attr1 in the qa_client2 docbase. You'll see that the default_value returned is 53016e8d80003913. Dumping this ID will show you the structure of this object, including "expression_text". You can probably optimize by putting the two steps into one query; a two-step sketch follows.

2. Another method is to get to it through IDfSession.getTypeDescription.
This method is ok if you are not getting the default value info for a lot of attributes, or performance is not a main concern for the customer. One getTypeDescription call can add about 0.5 second of overhead.

IDfTypedObject tobj = session.getTypeDescription("<type name>", null, null); // session is an IDfSession; just put null for the policy and state parameters

// loop to get all default values (default_value is a repeating attribute)
for (int i = 0; i < tobj.getValueCount("default_value"); i++) {
    IDfId defValueId = tobj.getRepeatingId("default_value", i);
    IDfPersistentObject pobj = session.getObject(defValueId);
    String defaultValue = pobj.getString("expression_text");
    ... // manipulate the default value
}

3. The quick and dirty way is:
create an object of that type (don't save it), and retrieve the attribute value.
This is not a good idea for production code; it is only OK for ad-hoc testing. An IAPI sketch is shown below.

SAN Vs NAS

Storage solutions for your network: SAN vs. NAS

No matter how much storage you have, it never seems like enough. In a typical network, users store data either on a locally attached (SCSI, FireWire, USB) hard drive or array, or on a centralized server. However, these solutions have performance- and management-related issues that make them inefficient and cumbersome, leading system administrators to look for new solutions to this old problem. Network Attached Storage (NAS) and Storage Area Networks (SAN) are two solutions that promise to address these needs.

The problems with the old way

In the past, when a user needed more storage space, the systems administrator may have simply added a second drive or, for a high-end user, a storage array. In environments where there was already an established infrastructure of centralized servers, adding more space meant adding more disks to the server. These solutions have problems, though.

When we add more storage to a local desktop, backups become more complicated. All essential data must be backed up on a regular basis to protect against data loss. If we have numerous machines on the network, each with tens or hundreds of GB of storage, backing up the data over the network is no small task. If we look at the most popular backup package on the Mac platform, Retrospect, we realize that it can only back up one client machine at a time. This means that we may have to deploy a large number of servers in order to service a small number of clients. Each server can only use locally attached tape drives to store the backup data, increasing the overall cost and management complexity of the backup solution. Since the relationship between clients and servers is essentially hard-coded for a backup run, if one server finishes, the locally attached tape drives on that server sit idle while another server is struggling to complete its backup script.

Another issue associated with locally attached storage is the inability to allocate space dynamically. If storage needs fluctuate based on project demands, an administrator may want to move a storage array or disk between users. Doing this with locally attached storage is cumbersome because it means some downtime for both users. Using a centralized server pool minimizes the problem slightly, but again moving disks between servers requires downtime that may now affect hundreds of users.

Finally, with locally attached storage the network becomes a bottleneck. With backups running across the network and users sharing large files by sending them across the network, the network quickly becomes overloaded at various times during the day. Network protocols (TCP/IP) are more efficient than in the past, but still not as efficient as storage protocols such as SCSI. TCP/IP has packet size limits that ultimately affect how fast large files can be streamed across the network.

**** NAS ****

The idea behind NAS is to optimize the traditional client/server network model. In a NAS environment, administrators deploy dedicated servers that perform only one function - serving files. These servers generally run some form of a highly optimized embedded OS designed for file sharing. The disks are locally attached to the NAS box, usually with high-performance SCSI, and clients connect to the NAS server just like a regular file server.

NAS servers often support multiple protocols - AppleTalk, SMB, and NFS - to make sharing files across platforms easier. Some NAS servers also allow for directly attached tape drives for local backups, but more often a centralized backup server is used, requiring that data be sent across the LAN. Since backups only occur between the NAS servers and the backup server though, many administrators opt to install a secondary LAN dedicated to backups (this requires the NAS server to support multiple NICs and multi-homing, a common function).

There are a lot of companies that make NAS solutions that support the Mac. The leader on the low-end is Quantum with their SNAP line of servers. With Mac OS X's support of NFS, the number of NAS solutions available to Mac users now includes market leaders like Network Appliance, Auspex, and Sun to name a few.

**** SAN ****

The idea behind a SAN is radically different from NAS. To begin with, different protocols are used. In most SAN implementations, Fibre Channel (FC) adapters provide physical connectivity between servers, disk arrays, and tape libraries. These adapters support transfer rates up to 1 Gb/s (with trunking available for faster connections) and generally use fiber cabling to extend distances up to 10 km. Fibre Channel uses the SCSI command set to handle communications between the computer and the disks.

A SAN essentially becomes a secondary LAN, dedicated to interconnecting computers and storage devices. To implement a SAN, the administrator installs a FC card in each computer that will participate in the SAN (desktop or server). The computer then connects to a Fibre-Channel switch (hub solutions are also available). The administrator also connects the storage arrays and tape libraries to the FC switch (converters available for SCSI arrays). To improve redundancy, a second FC card can be installed in each device and a fully meshed FC switch fabric built. The final step is installing any necessary software components for managing the SAN and allocating storage pools.

We now have a second network that allows all computers to communicate directly with all the disks and tape drives as if they were locally attached. If a computer needs more storage, the administrator simply allocates a disk or volume from a storage array. This also improves backup flexibility because tape drives can be dynamically allocated to servers as needed, ensuring efficient use of resources.

The SAN represents a second network that supplements your existing LAN. The advantage of a SAN is that SCSI is optimized for transferring large chunks of data across a reliable connection. Having a second network also off-loads much of the traffic from the LAN, freeing up capacity for other uses. The most significant effect is that backups no longer travel over the regular LAN and thus have no impact on network performance.

Since the disk arrays are no longer locally attached, we can also implement more advanced data recovery solutions. Mirroring is a popular technique for protecting against disk failure. Now we can mirror between two disk arrays located in two different locations. If one array dies, the server immediately switches to the remote mirror. If a server dies, we can bring a new server online, attach it to the FC fabric, and allocate the disks to it from the array, bringing the server back online without moving the storage array.

In a typical SAN, each server or desktop is allocated a set of disks from the central pool and no other computer can access those disks. If the administrator wants to shuffle space, they take the disks away from one computer and assign them to another. Adding more disks is as simple as adding more disks to the array or adding another array and allocating the disks to the server. Recent software advances though have made sharing of filesystems a reality. Now two or more computers can access the same files on the same set of disks, with the software acting as the traffic cop to guard against corruption. This allows for even more efficient use of space because users no longer maintain duplicate data. This also improves the ability to build clusters or other fault-tolerant systems to support 24x7 operations.

The next generation of SAN products promise to move storage traffic back to traditional network protocols like TCP/IP. Why do this when FC and SCSI are more efficient for moving large chunks of data? Well, FC switches are expensive compared to Ethernet switches and while FC has a performance advantage today, 10 Gigabit Ethernet will allow TCP/IP to surpass FC in overall transfer speed despite the higher overhead in transmitting data. The other advantage is that Ethernet and TCP/IP are well understood by most administrators and a little easier to troubleshoot than FC. Most administrators will still build a second LAN for storage needs, but switch to more standard protocols.

Unfortunately, the SAN market for the Mac is not quite as broad as on other platforms, but there are a number of good solutions available. These include arrays and cards from ATTO, AC & NC, Medea, Rorke Data, CharisMac, and 3ware. The biggest stumbling block for Mac users is the lack of support from FC adapter vendors. Emulex and JNI are the standards in the industry, yet they still don't offer direct Mac support.

Both NAS and SAN solutions offer a lot of potential for solving the problems associated with traditional storage solutions. Both solutions offer administrators new options in building high-performance, high-availability networks. Interoperability is often the most difficult issue to overcome when implementing a SAN, so evaluate complete solutions to ensure that you are successful in your endeavors.

Why can't I start the docbase after migration from 4.2 to 5.2.x?

Scenario:
Migrating the docbase from 4.2 to 5.2.x, or using dump and load from 5.2. to 5.2 docbase where the target server already has docbases.

The error message is:
"Failure to complete Crypto Objects initialization" error

Resolutions:
Since each Content Server installation has one and ONLY one AEK.key, migrating a docbase or using dump and load to a target server that already has docbases will cause the migrated docbase not to start, because it will be using a different AEK.key.

Many support notes suggest moving the dbpasswd.txt file and then running dm_encrypt_password. That procedure works if you are migrating the docbase to a server with no existing docbases; if this is not the case, you need to follow the resolution below:
1. Shut down all the docbases on the target server.
2. Back up the database.
3. Rename the AEK file, or rename it and move it somewhere else.
4. From SQL on the database: update dm_docbase_config_s set i_crypto_key = ' ';
5. SQL> select r_object_id from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
6. SQL> delete from dmi_object_type where r_object_id = '<r_object_id returned by step 5>';
7. SQL> commit;
8. SQL> delete from dmi_vstamp_s where r_object_id = '<r_object_id returned by step 5>';
9. SQL> commit;
10. Run dm_crypto_create from $DM_HOME/bin.
11. To re-encrypt the dbpasswd.txt file, do the following: cd $DM_HOME/bin
12. dm_encrypt_password -docbase -rdbms -encrypt
13. You need to repeat the database modifications for each docbase on the server.
14. Start up the docbases.
A consolidated sketch of the SQL in steps 4 through 9 follows.

SDS Setup

Though you need only one machine (holding all the channels) on the SDS source side to make your publishes, Documentum recommends that you have 3 machines on the source side in production (one Master and 2 Base Tuners). This helps you avoid overloading a single machine and is also easier to maintain. The Master Tuner acts like a repository where you put all the channels; from it you subscribe only those channels you need on each Base Tuner. The Base Tuner where your SDM is installed becomes the source of the actual publish data (which is your SCS target).

Here are the details of how you should ideally set this up in a production environment.


Starting with ContentCaster 5.2 (this is also implemented in 4.3.5), the architecture that Documentum recommends has changed significantly. The new architecture recommends 3 machines on the source side for Windows, or 3 instances (with different ports) on the same machine on Unix: 1 Master and 2 Base Tuners. This note tries to explain the setup in a nutshell. You can use the Channel Manager to do all of your configuration. If (on Unix flavours) your Channel Manager doesn't come up, you can use the commands mentioned in this note. I would strongly recommend reading the Install guide to learn the terminology used (e.g. Master, Base Tuner) and the architecture.

Master Tuner:
===========
When you install the Master Tuner, I believe it comes with Channel Copier. Using this channel, copy all the other .car files provided in the download center (Transmitter, SDM, Replicator, SDMClient, SDMCmd).
Now start the Tuner.
If the Channel Manager does not come up on a Unix box because X Windows is not available, you can use the following commands to copy the channels.
Navigate to the location where the runchannel command exists and issue the following commands (runchannel is available in the Tuner workspace).

Syntax:
runchannel http://localhost:5282/ChannelCopier -src <location of the .car file> -dst http://localhost:5282/<Channel Name>

Commands:
runchannel http://localhost:5282/ChannelCopier -src Transmitter.car -dst http://localhost:5282/Transmitter
runchannel http://localhost:5282/ChannelCopier -src SDMCmd.car -dst http://localhost:5282/SDMCmd
runchannel http://localhost:5282/ChannelCopier -src SDMClient.car -dst http://localhost:5282/SDMClient
runchannel http://localhost:5282/ChannelCopier -src SDM.car -dst http://localhost:5282/SDM
runchannel http://localhost:5282/ChannelCopier -src Rep.car -dst http://localhost:5282/Rep

Each command must be entered on a single line.
Once you have copied all the required channels onto the Master Tuner, set up the Base Tuners.
Note: localhost in the above commands is basically the Master Tuner host name. Since we are running the commands on the same machine, I used localhost.

Base Tuner 1:
===========
When you install a Base Tuner, there will be no other channels installed with it. You need to subscribe to the Transmitter and SDMClient channels from the Master Tuner. You can do this using the following commands.

Make sure you start the Base Tuner before subscribing the channels.

Syntax:
runchannel -subscribe http://<Master Tuner Hostname>:5282/<Channel Name>

Commands:
runchannel -subscribe http://<Master Tuner Hostname>:5282/Transmitter
runchannel -subscribe http://<Master Tuner Hostname>:5282/SDMClient

Now your Transmitter and SDMClient channels are subscribed on the Base Tuner. Let's move on to Base Tuner 2.

Base Tuner 2:
=============
When you install a Base Tuner, there will be no other channels installed with it. You need to subscribe to the SDM, Rep, SDMClient, and SDMCmd channels from the Master Tuner. You can do this using the following commands.

Make sure you start the Base Tuner before subscribing the channels.

Syntax:
runchannel -subscribe http://<Master Tuner Hostname>:5282/<Channel Name>

Commands:
runchannel -subscribe http://<Master Tuner Hostname>:5282/SDM
runchannel -subscribe http://<Master Tuner Hostname>:5282/Rep
runchannel -subscribe http://<Master Tuner Hostname>:5282/SDMClient
runchannel -subscribe http://<Master Tuner Hostname>:5282/SDMCmd

Now start Base Tuners 1 and 2. You should now be able to access the Deployment Manager (just Base Tuner 2 is enough to start the Deployment Manager). You can now run your deployments using the Base Tuners. The Master Tuner is not required for running deployments; it is used only during upgrades of channels or subscription of new versions at the Base Tuners.

Target Base Tuner(s):
==============
When you install a Base Tuner, there will be no other channels installed with it. You need to subscribe to the Rep and SDMClient channels from the Master Tuner. You can do this using the following commands.

Make sure you start the Base Tuner before subscribing the channels.

Syntax:
runchannel -subscribe http://<Master Tuner Hostname>:5282/<Channel Name>

Commands:
runchannel -subscribe http://<Master Tuner Hostname>:5282/Rep
runchannel -subscribe http://<Master Tuner Hostname>:5282/SDMClient

Error: A new folder/cabinet must have an object_name

Note: Why do I get the error "[DM_FOLDER_E_MUST_HAVE_NAME]error: A new folder/cabinet must have an object_name." when trying to create new content or import a document?

Web Publisher will display the above error message if the value of user_os_name on the dm_user object is set to null.

1. Obtain a dump of the user object.
2. Verify whether user_os_name is in fact set to null.
3. Update the value using API or DA (User Management); an IAPI sketch follows these steps.
4. Restart the Web Publisher session, and the problem should be fixed.

How to populate and publish the newly installed locale

Note: After installing different locales using the Language Pack Manager, populate and publish the newly installed locale to the Data Dictionary. To complete this, please follow the steps below:

1. Populating the new locale into the data dictionary
- Go to the following directory: \product\5.2\bin\webPublisher\Utils\DataDictionary
- Edit the file 'data_dictionary_wcm.ini'
- Uncomment the locale you would like to populate; for example, if you would like to populate French, uncomment
'data_dictionary_wcm_fr_FR.txt'
- Save and close the file
- Edit dd_populate_wcm.bat or dd_populate_wcm.sh, depending on your environment
- In the following line, replace the docbase name, user name, and password with your own information (for example: dmbasic -f%DM_HOME%\bin\dd_populate.ebs -eEntry_Point -- myTestDoc dmadmin myPassword data_dictionary_wcm.ini)
- Save and exit the file
- Execute dd_populate_wcm.bat or dd_populate_wcm.sh (depending on your environment)
- Now you have populated the data dictionary with the new locale; next we need to publish the data dictionary.

2. To publish the data dictionary
- Log into Documentum Administrator
- Traverse to ‘Job Management’ -> ‘Jobs’
- Choose dm_DataDictionaryPublisher, and ‘Tools’ ->‘Run’
- Finally, restart the application server once the job has completed.

What is an LDIF format file?

*************************************************************************************

LDIF is an acronym for LDAP Data Interchange Format.

LDIF is the main file format used to import or export data from a Lightweight Directory Access Protocol (LDAP) directory or an LDAP database. LDIF files are ASCII text files that represent data in a format that is recognizable to an LDAP directory or database.

To create an LDIF format file, you can use regular Notepad, with each line in your file ended with a carriage return (newline-delimited). One instance where this file format is used in Documentum is importing a text file to populate the user table. For details on how to import a text file, please refer to the Documentum Server Administrator's Guide.

*************************************************************************************

Subject: How do I import users into Documentum Administrator using a script?


Note: The file that Documentum Administrator reads must be in LDIF format. The default values for new users are as follows:

Argument Default

user_os_name username
user_address username
privileges 0(None)
folder /username
group docu
client_capability 1

Refer to the eContent Server Reference Manual, dm_user, for client_capability values.

There are no default values for acl_domain, acl_name, and user_os_domain.

You must use valid attribute names on the left side of the colons. You also must provide a valid acl_domain if you want to provide an acl_name, and so on.

The file below adds the users test3 and test4.

object_type:dm_user
user_name:test3
user_group_name:docu
user_address:test3
user_os_domain:ts_server
user_privileges:8
acl_domain:nghi421_816
acl_name:Global User Default ACL
client_capability:8

object_type:dm_user
user_name:test4
user_group_name:docu
user_address:test4
user_os_domain:ts_server
user_privileges:8
acl_domain:nghi421_816
acl_name:Global User Default ACL
client_capability:8



Once you have created this LDIF file, do the following:
-Log into the Documentum Administrator
-Click on Users
-Click on Import
-Next to the "import file path" field, click on Browse
-Locate the LDIF file that you created
-Click on "Import" at the bottom

Troubleshoot ORACLE tnsnames issues

*************************************************************************************

Subject: What causes the error "ORA-12154" when dmdbtest runs during dm_configure, using SQL*Net 2?


Note: Check the following:

1. permissions on "listener.ora" and "tnsnames.ora"
2. check syntax on "listener.ora" and "tnsnames.ora"

If you are able to connect using SQL*Plus using the SQL*Net 2 connect string, but it fails with dmdbtest, then double check your permissions on those files.

12154, 00000, "TNS:could not resolve service name"
// *Cause: The service name specified is not defined correctly in the
// TNSNAMES.ORA file.
// *Action: Make the following checks and correct the error:
// - Verify that a TNSNAMES.ORA file exists and is in the proper
// place and accessible. See the operating system specific manual
// for details on the required name and location.
// - Check to see that the service name exists in one of the
// TNSNAMES.ORA files and add it if necessary.
// - Make sure there are no syntax errors anywhere in the file.
// Particularly look for unmatched parentheses or stray characters.
// Any error in a TNSNAMES.ORA file makes it unusable. See
// Chapter 4 in the SQL*Net V2 Administrator's Guide. If
// possible, regenerate the configuration files using the Oracle
// Network Manager.

*************************************************************************************

Subject: How to resolve Oracle "ORA-12545 TNS:name lookup failure" errors.


Note: Here is the Oracle description of this error:

12545, 00000, "Connect failed because target host or object does not exist"
// *Cause: The address specified is not valid, or the program being
// connected to does not exist.
// *Action: Ensure the ADDRESS parameters have been entered correctly; the
// most likely incorrect parameter is the node name. Ensure that the
// executable for the server exists (perhaps "oracle" is missing.)

Cause: A protocol-specific address parameter cannot be resolved.

If this message shows up in the docbase log file and the docbroker log file shows an nl_register 619 failure, then check /etc/services to see what port the dmdocbroker is set to. The services file should contain the following:

dmdocbroker 1489/tcp #Documentum Docbroker "well known port"

After correcting this problem, try launching the docbroker again. If that succeeds, start the docbase again.

*************************************************************************************

Subject: Installation fails when 'Configure Docbase' attempts to connect to the Oracle database with error DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION.


Note: Installation fails when 'Configure Docbase' attempts to connect to the Oracle database. Below is a list of possible problems which may be reported by the Oracle database. To troubleshoot, identify the error you received and review its possible resolutions.

DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION error: Could not establish an internal, temporary database connection; database system error was: ORA-01017 invalid username/password; logon denied

01017, 00000, "invalid username/password; logon denied"
// *Cause:
// *Action:

Generally, this problem occurs when you are starting a docbase and it fails to connect due to an incorrect password or can not access the dbpasswd.txt file.

1. Verify that the docbase owner specified in server.ini and the password specified in the dbpasswd.txt file are the correct database credentials. Both files are located under DOCUMENTUM\dba\config\. Attempt to connect to Oracle using the name and password listed:

plus33 <docbase_owner>/<password>@<tns_alias>

Is your connect successful?

If your connection was not successful, verify the Oracle password of the docbase owner, or alter the password of the docbase owner by connecting to Oracle as the system user and issuing the following statement: 'alter user <docbase_owner> identified by <new_password>'.

2. If your connection was successful, then verify that the path to the dbpasswd.txt file in the server.ini file is correct and that the file contains the proper password.

DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION error: Could not establish an internal, temporary database connection; database system error was: ORA-01034 ORACLE not available

01034, 00000, "ORACLE not available"
// *Cause:
// *Action:

1. Verify that the OracleService has been started. You can do this from Control Panel/Services

2. Verify that the ORACLE_SID and ORACLE_HOME environmental variables are set and defined correctly in your TNS alias. Ensure that your TNS alias is configured properly.

Is the ORACLE_SID correct?
Is the host name correctly identified?
Is the protocol identified tcp/ip?
Are you spelling the TNS alias correctly in your connect string?

DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION error: Could not establish an internal, temporary database connection; database system error was: ORA-12203: TNS:unable to connect to destination

12203, 00000, "TNS:unable to connect to destination"
// *Cause: Invalid TNS address supplied or destination is not listening.
// *Action: Ensure that the listener is running at the remote node, or ensure
// that the ADDRESS parameters specified are correct.

1. Verify that the OracleTNSListener Service has been started. You can do this from Control Panel/Services

2. Ensure that your TNS alias is configured properly. See section 1.4.4.5.5.

Is the ORACLE_SID correct?
Is the host name correctly identified?
Is the protocol identified tcp/ip?
Are you spelling the TNS alias correctly in your connect string?

3. Test your alias by connecting to Oracle with plus32/plus33.
plus33 @

4. Check the permissions on the tnsnames.ora file; Everyone should have Change permission.

5. Check the location of the tnsnames.ora file. It should be in the directory \ORANT\NETWORK\ADMIN.

DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION error: Could not establish an internal, temporary database connection; database system error was: ORA-12154 TNS:could not resolve service name

12154, 00000, "TNS:could not resolve service name"
// *Cause: The service name specified is not defined correctly in the
// TNSNAMES.ORA file.
// *Action: Make the following checks and correct the error:
// - Verify that a TNSNAMES.ORA file exists and is in the proper
// place and accessible. See the operating system specific manual
// for details on the required name and location.
// - Check to see that the service name exists in one of the
// TNSNAMES.ORA files and add it if necessary.
// - Make sure there are no syntax errors anywhere in the file.
// Particularly look for unmatched parentheses or stray characters.
// Any error in a TNSNAMES.ORA file makes it unusable. See
// Chapter 4 in the SQL*Net V2 Administrator's Guide. If
// possible, regenerate the configuration files using the Oracle
// Network Manager.

1. Verify that the OracleTNSListener Service has been started. You can do this from Control Panel/Services

2. Ensure that your TNS alias is configured properly. See section 1.4.4.5.5.

Is the ORACLE_SID correct?
Is the host name correctly identified?
Is the protocol identified tcp/ip?
Are you spelling the TNS alias correctly in your connect string?

3. Test your alias by connecting to Oracle with plus32/plus33.
plus33 @

4. Check the permissions on the tnsnames.ora file; Everyone should have Change permission.

5. Check the location of the tnsnames.ora file. It should be in the directory \ORANT\NETWORK\ADMIN.

DM_SESSION_E_CANT_MAKE_TEMP_CONNECTION error: Could not establish an internal, temporary database connection; database system error was: ORA-12545 TNS:name lookup failure

12545, 00000, "TNS:name lookup failure"
// *Cause: A protocol specific ADDRESS parameter cannot be resolved.
// *Action: Ensure the ADDRESS parameters have been entered correctly;
// the most likely incorrect value is the node name.

Things to check for when getting this error are:

1. Verify that the OracleTNSListener Service has been started. You can do this from Control Panel/Services

2. Ensure that your TNS alias is configured properly. See section 1.4.4.5.5.
Is the ORACLE_SID correct?
Is the host name correctly identified?
Is the protocol identified tcp/ip?
Are you spelling the TNS alias correctly in your connect string?

3. Are the host name and the computer name of the machine you are running on the same?
From the command prompt, type 'hostname'. Is the value returned the host name you have identified in the TNS alias? What is the Computer Name listed in Control Panel/Network/Identification? Does it agree with the host listed in your TNS alias?

What is the host name listed in Control Panel/Network/Protocols/TCP IP/DNS? Does it agree with the host listed in your TNS alias?

If there are inconsistencies, they should be corrected. Please note that the hostname and computer name are NOT case sensitive.

*************************************************************************************

Subject: What to check if no docbases are able to start up. Dmdbtest showed that the error from Oracle was "12541."


Note: The Oracle error output is below. The problem was that someone had changed the permissions on the tnsnames.ora file so that Documentum couldn't access the file.
We changed the permissions and the docbases started.

12154, 00000, "TNS:could not resolve service name"
// *Cause: The service name specified is not defined correctly in the
// TNSNAMES.ORA file.
// *Action: Make the following checks and correct the error:
// - Verify that a TNSNAMES.ORA file exists and is in the proper
// place and accessible. See the operating system specific manual
// for details on the required name and location.
// - Check to see that the service name exists in one of the
// TNSNAMES.ORA files and add it if necessary.
// - Make sure there are no syntax errors anywhere in the file.
// Particularly look for unmatched parentheses or stray characters.
// Any error in a TNSNAMES.ORA file makes it unusable. See
// Chapter 4 in the SQL*Net V2 Administrator's Guide. If
// possible, regenerate the configuration files using the Oracle
// Network Manager.

*************************************************************************************

How do I get the complete comments of a workflow?

Note: In a workflow, a performer can enter comments while performing a task, and those comments are carried to the next performer in the workflow. When using the WebPublisher workflow report to review workflow instances, or the workflow history, you are only able to get truncated comments back from WP's interface. Especially for workflow instances that have already been aborted, there is no way to get the complete comments back from WP's GUI. In some cases those comments are very important to customers, and they need a way to get them back.

Here are the steps to get those comments back:

1) Identify the workflow from the dm_workflow table, get the r_object_id of the workflow:
select r_object_id, object_name from dm_workflow where object_name = 'your work flow name'

2) Identify the notes that are carried by this workflow:
select r_note_id from dmi_package where r_workflow_id = '<the object id of the workflow>'

3) Get the content id of each of those note ids returned:
select r_object_id from dmr_content where any parent_id = 'the note id'

4) Go to DA, Administration->Job Management->Administration, use the "GET_PATH" method to find out the path of the files which stores the comments.

Following the above steps, you will be able to find all the comments carried by a specific workflow instance. A combined sketch of the queries and the GET_PATH call follows. Another way to get this information is to enable session DMCL tracing and look in the log file.

WP Rules and Relations

Subject: When using the Rules Editor to create a "New Rule", why can't I see all the Editor Rules folders?

Note: The "New Rule" action is running a query for category objects that may not match the object_name of the folders you might expect to see with folder view of the Template folders.

There are some details provided in the online Help (WebPublisher 442c) in the section, "Creating a New Rules File Using the Rule Editor".

Following the process for creating a new rules file using the Rule Editor, you select an unassociated WebPublisher Editor template file upon which to base the new rules file. The WebPublisher XML template must already be imported into the Docbase in order for you to create the rules file. Creating a rules file checks out and locks the WebPublisher Editor template file.

Please refer to the documentation (either online help of pdf docs) for more details on using the Rules Editor.

A bit more background and detail on what is happening when you click on "New Rule":

If you do not have a template already in place, you will need to create and import your template file. It must have the category set. When you use the Web-based Rules Editor to add a "New Rule" from the folder /WebPublisher Configuration/Supporting Templates/Editor Rules, the navigation will list the categories. The WebPublisher template file cannot be an HTML file; it must be an XML template file to use the Rules Editor.

If you look at the query that is run to display the categories in a folder view you will see a folder structure made up of wcm_category objects. These are not necessarily the same folder structure you will see if you navigate to the folder /WebPublisher Configuration/Supporting Templates/Editor Rules.

query traced in WebPublisher 4.4.2c:
API> execquery,s0,T,select upper(object_name), r_object_id, i_vstamp, r_object_type, i_is_reference, object_name, a_content_type, r_modify_date, r_lock_owner, r_link_cnt, object_name from wcm_category where
FOLDER('/WebPublisher Configuration/Content Templates')
AND a_is_hidden = false order by 1

NOTE: If you perform an API dump for one of the objects returned by the above query, check that wcm_category object's value for "r_folder_path". It should indicate something like /WebPublisher Configuration/Content Templates/<template-category>, where <template-category> is the category value.

*************************************************************************************

Subject: How can I find the rules and presentation file associations for a given xml file in a WebPublisher cabinet?


Note: You can use the DMCL tracing tool to find out the DQL queries running for a particular action in WebPublisher.

To find out the query that runs behind the scenes to get the rules file and the presentation file of a file;
- Login to WebPublisher
- Start the DMCL Browser Tracing tool (see Support note 5023 for more details on the DMCL Browser Tracing tool).
- Locate and select the file.
- Click on Associate.
This is going to give you a page with the current associations of the selected file.
- Go to the tracing tool and look for readquery to get the query which ran in the background.


The following DQL queries were pulled out of the trace tool as described above:

TO GET THE ASSOCIATED RULES FILE:

DQL> select r_object_id,description from dm_document d, dm_relation r
where d.i_chronicle_id_i = r.child_id and r.parent_id =
ID('') and relation_name ='wcm_rules_template';


TO GET THE ASSOCIATED PRESENTATION FILE:


DQL> select r_object_id,description from dm_document d, dm_relation r where
d.i_chronicle_id_i = r.child_id and r.parent_id =
ID('') and relation_name ='wcm_layout_template';

*************************************************************************************

Subject: Why might I see "UIMain.initializationFailure" in the Rules Editor?


Note: When first invoking the rules editor you may experience this error.

One method to correct this issue is to set the local_path variable to a writable directory on the application server machine. The local_path attribute is set in the dmcl.ini file on the application server and should look something like:

local_path=c:\temp

This tells the Rules Editor to write to the file system at C:\temp.

This entry should be added to the [DMAPI_CONFIGURATION] section.

NOTE: adding local_path with a trailing backslash, as in local_path=c:\temp\, does NOT work.

It should also be noted that this attribute is not required and is simply provided as an option.

*************************************************************************************

Subject: How do I update the integrity_kind value of a dm_relation type in WebPublisher?


Note: You may run into a situation where you receive an error similar to the following when deleting an object in WebPublisher:

"cannot delete object XXXX since dm_relation object(s) XXXXXXXXX
exists with integrity_kind set to 1."; ERRORCODE: 100; NEXT; null"

In order to delete the object, you can set the integrity kind for a relationship by running the following:

IAPI>retrieve,c,dm_relation_type where relation_name='<relation name>'
IAPI>set,c,l,integrity_kind
IAPI>0
IAPI>save,c,l

NOTE: The dm_relation types used by WebPublisher are listed below.


DQL> select * from dm_relation_type where relation_name like 'wcm%';

relation_name

wcm_template_thumbnail
wcm_process_workflow
wcm_layout_template
wcm_rules_template
wcm_publishing_template
wcm_category
wcm_my_template
wcm_default_workflow
wcm_dynamic_content
wcm_doc_template
DQL>



*************************************************************************************

How do I configure the "Housekeeping" DFC feature to delete downloaded files from checked out XML and Virtual Documents?

You can configure the "Housekeeping" feature by adding the string value "NumberOfDays" to the HKEY_LOCAL_MACHINE\Software\Documentum\Common\Housekeeping key. You may have to create the "Housekeeping" key first. This value tells DFC to clean up the downloaded files every X number of days.

How do I add a Workflow History to a Completed Package in WebPublisher?

Note: There is no built-in functionality for automatically recording workflow history; this tip illustrates one way to do it.

On completion of the workflow the name of the last action owner of each activity is added to attributes of the documents in the workflow package. In this example, the document attached to the workflow has attributes that can hold information about who created, designed, draft-checked and engineering-checked the document.

To use this example, do the following:

1. Create one workflow with at least 4 activities
"Approval"
"Design"
"Draft Check"
"Eng Check"

2. Create a new object type and add the following string attributes
"approved"
"created_by"
"checked"
"engineered"

3. Create a text file containing the method code. ( see below)

4. Create a method which runs this code. (see below)

5. At the end of the workflow add one activity, set it as an automatic activity run by the docbase owner, and add the method object described in this example.


method code
***********


' main code for workflow history.
' Create a text file and save it to c:\WF_History.txt.
' ****************************************************

Sub WF_History (ByVal stub1 As String, _
ByVal docbase As String, _
ByVal stub2 As String, _
ByVal user As String, _
ByVal stub3 As String, _
ByVal workItemId As String, _
ByVal stub4 As String, _
ByVal ticket As String, _
ByVal stub5 As String, _
ByVal mode As String)

On Error GoTo WF_History_Error

Dim clientX As Object 'dfclib.DfClientX
Dim client As Object 'IDfClient
Dim PersObject as Object 'IDfPersistentObject
Dim doc As Object
Dim docIdObj As Object
Dim docIdStr As String
Dim errorMsg as String
Dim packageCollection As Object
Dim session As Object 'IDfSession
Dim workItem As Object
Dim workItemIdObj As Object

Dim objID as String 'object as string
Dim SessID as String 'session id as string
Dim ObjName as String 'object name
Dim err_msg as String
Dim query as string
Dim workitem_id as string
Dim workflow_id as string
Dim process_id as string
Dim wf_activity_id as string
Dim index as integer
Dim activity_group (4,3) as string


'_____________________________________________
'
' WF History Variable setting
' This array holds the 4 phases of the workflow that we are going to
' hold history data for. It holds the names of the attributes where the
' user names will be written to, it also stores the user names to be written.
'
'_____________________________________________
activity_group(0,0) = "Approval"
activity_group(1,0) = "Design"
activity_group(2,0) = "Draft Check"
activity_group(3,0) = "Eng Check"

activity_group(0,1) = "approved"
activity_group(1,1) = "created_by"
activity_group(2,1) = "checked"
activity_group(3,1) = "engineered"

workitem_id = workItemId

'_____________________________________________
'
' Connection
'_____________________________________________
' Get a session.
errorMsg = "Connecting to docbase"
sessionId = dmAPIGet("connect," & docbase & "," & user & "," & ticket)
If Basic.OS = ebWin32 Then
Set clientX = CreateObject("Documentum.Dfc")
Else
Set clientX = CreateObject("java:com.documentum.com.DfClientX")
End If
Set client = clientX.getLocalClient()
Set session = client.adoptDMCLSession(sessionId)

'_____________________________________________
'
' Gets the process_id from the current
' workflow by using the workitem's ID supplied to the method.
'_____________________________________________
'
' get dm_process id from the active workitem
' getObjectByQualification returns an IDfPersistentObject for the matching workitem
query="dmi_workitem where r_object_id = '" & workitem_id & "'"
set PersObject = session.getObjectByQualification(query)
workflow_id = PersObject.getString("r_workflow_id")

query="dm_workflow where r_object_id = '" & workflow_id & "'"

set PersObject = session.getObjectByQualification(query)
process_id = PersObject.getString("process_id")


'_____________________________________________
'
' Get the user names of the users who last owned each activity in the workflow.
'
' Iterate through the workflow phases defined in the activity_group
' array and get the full name of the LAST PERSON WHO OWNED THAT ACTIVITY. This
' user name is written into element i,2 of the activity_group array.
'
' HOW DOES IT DO THAT?
' It looks in the repeating attribute r_act_name on the dm_process object for the
' value that matches the required activity. Using this repeating attribute index
' it can look in the matched pair attribute r_act_def_id for the object ID of the
' related workitem
' It then looks up the performer name from the workitem and gets the fullname
' from the dm_user record. Neat.
'_____________________________________________
'
for i = 0 to ubound (activity_group) -1
query="dm_process where r_object_id = '" & process_id & "'"
set PersObject = session.getObjectByQualification(query)
index = PersObject.findString("r_act_name", activity_group(i,0))
wf_activity_id = PersObject.getRepeatingString("r_act_def_id",index)

'get workitem id
query = "dmi_workitem where r_act_def_id = '" & wf_activity_id & _
"' order by r_creation_date DESC"
set PersObject = session.getObjectByQualification(query)

' Get the short user name from this workitem
activity_group(i,2) = PersObject.getString("r_performer_name")

' get the full user name
query = "dm_user where user_name = '" & activity_group(i,2) & "'"
set PersObject = session.getObjectByQualification(query)
activity_group(i,2) = PersObject.getString("description")

set PersObject = nothing
next i

'_____________________________________________
'
' Acquire workitem
'_____________________________________________
'
' Get work item id object.
errorMsg = "Getting work item id object"
Set workItemIdObj = clientX.getId(workItemId)

' Get work item object.
errorMsg = "Getting work item object"
Set workItem = session.GetObject(workItemIdObj)

' Acquire the work item only when mode = "0" for not restarting.
If mode = "0" Then
errorMsg = "Acquiring work item"
workItem.acquire
End If

'_____________________________________________
'
' Package Handling
' We are going to write the attribute info back to the Documents inside the
' package, so we need to get this out of the package.
'_____________________________________________
'
' Get the packaged document
errorMsg = "Getting packages from work item"
Set packageCollection = workItem.getPackages("")

While packageCollection.Next
objID = packageCollection.getString("r_component_id")
Set docIdObj = clientX.getId(objID)
Set doc = session.GetObject(docIdObj)

' Set object attributes for each document in this package.
errorMsg = "Set values " + objID

for i = 0 to ubound (activity_group) -1
ret= doc.setString (activity_group(i,1), activity_group(i,2))
next i

doc.save
Wend
packageCollection.Close



'_____________________________________________
'
' Completion of workitem
'_____________________________________________

' Complete the workitem. Assume there is
' only one output port for this task.

errorMsg = "Completing work item"
workItem.complete

set PersObject = nothing
set doc = nothing
set docIdObj = nothing
set clientX = nothing
set client = nothing
set session = nothing
set workItem = nothing
set workItemIdObj = nothing
set packageCollection = nothing

Exit Sub

WF_History_Error:

Print errorMsg
dmExit (100)

End Sub
'_____________________________________________


***********************************
Code to create server method
***********************************

'==================================
' dm_method creation script file.
'==================================

create,c,dm_method
set,c,l,object_name
wf_history_with_Initial
set,c,l,method_verb
dmbasic -eWF_History
set,c,l,run_as_server
T
set,c,l,use_method_content
T
set,c,l,a_special_app
Workflow
set,c,l,timeout_min
30
set,c,l,timeout_max
300
set,c,l,timeout_default
60
set,c,l,method_type
dmbasic
setfile,c,l,c:\WF_History.txt,crtext
save,c,l
getmessage,c
-------------------------------------

How do I run a transformation in WebPublisher via command line?

Note: You can run the transformation via command line from any host where DFC is installed. This should be run in the WebPublisher environment using the following:

java org.apache.xalan.xslt.Process -IN foo.xml -XSL foo.xsl -OUT foo.html

Invoke any file in its native application from Java code

Runtime.getRuntime().exec("rundll32 url.dll,FileProtocolHandler " + file.getAbsolutePath());

Windows will try to start the appropriate program (whatever application is registered for the file type). This works for all types of files, no matter what the file format is.

Adding attributes to an Audit Trail

The following is the correct syntax and the only way to add attributes to an audit trail:

Syntax
dmAPIExec("audit,session[,object_id],event_name
[,audit_subtypes][,controlling_app]
[,policy_id[,state_name]][,sign_audit][,authentication]
[,event_description][,attribute_list]")


However, please note this will NOT work for the DM_ALL event:

Example (the six consecutive empty arguments stand in for the optional arguments in the syntax above):

API> audit,c,030058888000017d,dm_all,T,,,,,,,'acl_name'
...
[DM_AUDITTRAIL_E_AUDIT_ALL_NOT_ALLOW_EXTRA]error: "You are not allowed to specify extra values, for example, ',,,,,,'acl_name'', when auditing all events."


So instead of dm_all, use another event, for example dm_save:

API> audit,c,030058888000017d,dm_save,T,,,,,,,'acl_name'
...
OK

The file cannot be web viewed for the following reason: No publishing configuration exists for any of the cabinets in which this file is located

Follow the checklist below to resolve the above error:
1. Make sure the end-to-end test works for the webcache configuration (source to target)
2. Make sure that documents that are web viewable have locale (language) settings.
3. Make sure globalization is turned off.
4. Make sure there are some templates that have no locale settings.
5. Do a DMCL trace of the webview operation and confirm that it contains a query like the following that returns no results:
select title, object_name, r_object_id from dm_webc_config c where is_active=1 AND any version_labels = 'WIP' and source_folder_id = '0c00b40b80044805' and c.object_name in ( select t.object_name from dm_webc_target t where t.object_name=c.object_name and t.language_code in (''))


If all 5 conditions above hold true, then do the following:
1. Turn on globalization in Web Publisher configuration.
2. For each template with no language code/locale setting, go to the IAPI prompt on the Content Server box and type:
retrieve,c,dm_document (template doctype) where object_name = 'Name of template'
set,c,l,language_code
->en_US (or other locale code)
3. Do the above for existing content that doesn't have a locale setting, or write a script to do so (a short DFC sketch follows below).

Create content with the modified templates and webview them.
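
For step 3, a minimal DFC sketch of such a script (the folder path, type and locale are assumptions -- adjust them to your site):

import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class FixLanguageCode {
    // Set language_code on every document under a folder that has no locale yet.
    public static void fix(IDfSession session) throws DfException {
        IDfClientX clientx = new DfClientX();
        IDfQuery query = clientx.getQuery();
        query.setDQL("select r_object_id from dm_document "
                + "where folder('/content', descend) "
                + "and (language_code is nullstring or language_code = '')");
        IDfCollection col = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (col.next()) {
                IDfSysObject obj =
                        (IDfSysObject) session.getObject(col.getId("r_object_id"));
                obj.setString("language_code", "en_US");
                obj.save();
            }
        } finally {
            col.close();
        }
    }
}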

How to publish a specific folder through an API command?

API>apply,c,null,HTTP_POST,APP_SERVER_NAME,S,WebCache,SAVE_RESPONSE,I,-1,LAUNCH_ASYNC,B,F,TRACE_LAUNCH,B,T,TIME_OUT,I,120,ARGUMENTS,S,-docbase_name INTRANET -config_object_id [r_object_id of the specific scs configuration] -method_trace_level 10 -source_object_id [r_object_id of the specific folder]
...
q1



*********** Fetch the result object from the collection ***************
API>next,c,q1
...
OK

*********** if result is 0, then Success ***************
API>dump,c,q1
...
USER ATTRIBUTES

result : 0
http_response_status : HTTP/1.1 200 OK
request_failed : F
response_doc_id : 0000000000000000
time_taken : 33
timed_out : F
time_out_length : 120
SYSTEM ATTRIBUTES
APPLICATION ATTRIBUTES
INTERNAL ATTRIBUTES



*********** if result is -1, then Failure ***************
API>dump,c,q1
...
USER ATTRIBUTES

result : -1
http_response_status :
request_failed : F
response_doc_id : 09015a2d80043b36
time_taken : 120
timed_out : T
time_out_length : 120
SYSTEM ATTRIBUTES
APPLICATION ATTRIBUTES
INTERNAL ATTRIBUTES
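
A rough DFC equivalent of the IAPI apply call above, for use from Java (the class and method names are mine; the docbase name and the two object ids are placeholders you must supply, and null is passed for the object id just as in the IAPI call):

import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfList;

public class PublishFolder {
    public static void publish(IDfSession session, String configId, String folderId)
            throws DfException {
        IDfClientX clientx = new DfClientX();
        IDfList parms  = clientx.getList();
        IDfList types  = clientx.getList();
        IDfList values = clientx.getList();

        // Same name/type/value triplets as the IAPI example.
        parms.appendString("APP_SERVER_NAME"); types.appendString("S"); values.appendString("WebCache");
        parms.appendString("SAVE_RESPONSE");   types.appendString("I"); values.appendString("-1");
        parms.appendString("LAUNCH_ASYNC");    types.appendString("B"); values.appendString("F");
        parms.appendString("TRACE_LAUNCH");    types.appendString("B"); values.appendString("T");
        parms.appendString("TIME_OUT");        types.appendString("I"); values.appendString("120");
        parms.appendString("ARGUMENTS");       types.appendString("S");
        values.appendString("-docbase_name INTRANET -config_object_id " + configId
                + " -method_trace_level 10 -source_object_id " + folderId);

        IDfCollection result = session.apply(null, "HTTP_POST", parms, types, values);
        try {
            while (result.next()) {
                // result = 0 means success, -1 means failure (see the dumps above).
                System.out.println("result = " + result.getString("result"));
            }
        } finally {
            result.close();
        }
    }
}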

DQL Tips

-------------------------------------------------------------------------------------

*** get CURRENT TIME on server ***
select DATE(NOW) as systime from dm_server_config

-------------------------------------------------------------------------------------

*** get the implicit version label ***
select s.r_object_id, s.object_name, r.r_version_label from dm_sysobject s, dm_sysobject_r r where r.r_object_id = s.r_object_id and r.i_position = -1

-------------------------------------------------------------------------------------

*** include r_object_id to remove blank rows

select object_name, r_object_id, a_expiration_date from dfas_common where folder('/Raj Srinivasan') and any a_expiration_date is not nulldate

-------------------------------------------------------------------------------------

*** select documents with pdf renditions (optimized for performance) ***
select * from dm_document where exists(select * from dmr_content where any parent_id=dm_document.r_object_id and full_format='pdf')

-------------------------------------------------------------------------------------

*** select all documents that have nulldate in repeating attribute
select object_name from dfas_common where folder('/Raj Srinivasan') and r_object_id NOT IN (select r_object_id from dfas_common where folder('/Raj Srinivasan') and any a_expiration_date is not nulldate)

-------------------------------------------------------------------------------------

*** select all documents that have nulldate in repeating attribute or a_expiration_date has past
select count(*) from dfas_common where folder('/content/Corporate Resources/Human Resources',descend) and (any a_expiration_date < date(today) or (r_object_id NOT IN (select r_object_id from dfas_common where folder('/content/Corporate Resources/Human Resources',descend) and any a_expiration_date is not nulldate)))

-------------------------------------------------------------------------------------

*** select folderpath with filename
select distinct s.object_name, fr.r_folder_path from dm_sysobject (all)
s,dm_sysobject_r sr,dm_folder_r fr where sr.i_position = -1 and
sr.r_object_id = s.r_object_id and fr.r_object_id = sr.i_folder_id and
fr.i_position = -1 and fr.r_folder_path like '/ChemNet/%'
order by fr.r_folder_path,s.object_name

-------------------------------------------------------------------------------------

*** empty folders ***
SELECT r_object_id,r_folder_path FROM dm_folder f WHERE r_object_id NOT IN
(SELECT distinct i_folder_id FROM dm_sysobject WHERE any i_folder_id = f.r_object_id and folder('/content',descend)) and folder('/content',descend)

-------------------------------------------------------------------------------------

*** NULL a_effective_date ***
select object_name from dfas_common where folder('/images', descend) and r_object_id NOT IN (select r_object_id from dfas_common where folder('/images', descend) and any a_effective_date is not nulldate)

-------------------------------------------------------------------------------------

*** select component from workitem ***
select
r_component_id
from
dmi_package p
where
exists (select r_object_id
from dmi_workitem w
where
w.r_object_id = '<workitem_id>' and
w.r_workflow_id = p.r_workflow_id and
w.r_act_seq_no = p.r_act_seq_no)

-------------------------------------------------------------------------------------

*** select renditions ***
SELECT s.object_name, f.dos_extension
FROM dm_dbo.dm_sysobject_s s, dm_dbo.dmr_content_r c, dm_dbo.dm_format_s f
WHERE (c.parent_id = s.r_object_id)
and (c.i_format = f.r_object_id)
AND c.page=0 AND f.dos_extension <> 'xml'
AND folder('/whatever')

-------------------------------------------------------------------------------------

*** find workflow given document ***
select * from dm_workflow where r_object_id in (select r_workflow_id from dmi_package where any r_component_id in (select r_object_id from dm_sysobject (all) where i_chronicle_id in (select i_chronicle_id from dm_sysobject where r_object_id=''))) and r_runtime_state='1'

-------------------------------------------------------------------------------------

*** return only inherited attributes ***
SELECT attr_name FROM dmi_dd_attr_info WHERE type_name = 'my_custom_type' AND attr_name NOT IN
(SELECT attr_name FROM dmi_dd_attr_info WHERE type_name = 'dm_document')

-------------------------------------------------------------------------------------

Documents uploaded into the docbase between Aug 2004 and Oct 2004

select a.r_object_id,a.r_creation_date,a.r_modify_date,b.user_group_name
from dm_document a, dm_user b
where (a.r_creation_date >= Date('08/01/2004','mm/dd/yyyy') and a.r_creation_date <= Date('10/31/2004','mm/dd/yyyy') ) and a.r_modifier = b.user_name

-------------------------------------------------------------------------------------

Documents and their size

select a.object_name, b.content_size/1024 as "TotalSize_KB"
from dm_sysobject a, dmr_content b
where
any b.parent_id = a.r_object_id
and FOLDER('/ChemNet/Home',descend)
and full_format='aspx'
and rendition > 0

-------------------------------------------------------------------------------------

Subject: Why do we create four views for each subtype?


Note: For example the type dm_document has four views:

dm_document_sp
dm_document_sv
dm_document_rp
dm_document_rv

The _sp and _rp views are used by DQL, and the _sv and _rv views are used by the Object Manager.
Also, the _sp and _sv views are for single (non-repeating) attributes, while the _rp and _rv views are for repeating attributes.

-------------------------------------------------------------------------------------

Label text of either a system type or a custom type

select label_text from dm_nls_dd_info where parent_id in
(select r_object_id from dm_aggr_domain where type_name = 'field_type')

Results:
Content Location
Keywords Category
Business Unit Owner

-------------------------------------------------------------------------------------

DQL to list all documents attributes and their associated folder path

select s.object_name, fr.r_folder_path
from dm_document s, dm_sysobject_r sr,
dm_folder_r fr
where s.i_is_deleted = 0
and sr.i_position = -1
and sr.r_object_id = s.r_object_id
and fr.r_object_id = sr.i_folder_id
and FOLDER('/ChemNet', descend)
and fr.r_folder_path like '/ChemNet/%'
order by fr.r_folder_path,s.object_name

-------------------------------------------------------------------------------------

*** Web cabinet of a content ***

SELECT object_name FROM dm_cabinet
WHERE r_object_id IN (SELECT i_cabinet_id FROM dm_folder
WHERE r_object_id IN (SELECT i_folder_id FROM dm_document (ALL)
WHERE object_name like 'test%')) AND r_object_type='wcm_channel'

OR

select r_object_id, object_name from wcm_channel where r_object_id in
(select i_ancestor_id from dm_folder where r_object_id in
(select i_folder_id from dm_sysobject where object_name = 'test'))

-------------------------------------------------------------------------------------

*** To get the first-level folders of a cabinet ***


select object_name from dm_folder where any i_folder_id in (select r_object_id from dm_cabinet where object_name='Cabinets')

-------------------------------------------------------------------------------------
*** To show the previous Active content ***

select r_object_id, i_chronicle_id, DATETOSTRING(DATE(TODAY),'ddmmyyyy') as sysdate from dm_document (ALL) where object_name like 'test%' and FOLDER('/ChemNet/ChemonicsProcess/Bidding') and any r_version_label = 'Active'

-------------------------------------------------------------------------------------
To find the folders to which a particular group has a specified permission

select for write object_name from dm_folder where acl_name IN (select object_name from dm_acl where any r_accessor_name = 'dm_world') and folder('/ChemNet',descend)

or

select object_name from dm_folder where acl_name IN (select object_name from dm_acl where any r_accessor_name = 'dm_world' and any r_accessor_permit = '7') and folder('/ChemNet',descend)

1 - NONE
2 - BROWSE
3 - READ
4 - RELATE
5 - VERSION
6 - WRITE
7 - DELETE
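
For reference, these are the same permit levels that IDfSysObject.getPermit() returns in DFC. A small check, assuming a valid session (the folder path is just an example):

import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class CheckPermit {
    // Prints the current user's permit level (1..7, per the table above) on a folder.
    public static void check(IDfSession session) throws DfException {
        IDfSysObject folder = (IDfSysObject) session.getObjectByQualification(
                "dm_folder where any r_folder_path = '/ChemNet'");
        System.out.println("permit level = " + folder.getPermit());   // e.g. 6 = WRITE
    }
}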

-------------------------------------------------------------------------------------

DFC to insert rows into registered table

import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public static IDfCollection execQuery(String queryString, IDfSession session)
        throws DfException {
    // Example queryString (table and column names are placeholders):
    //   "insert into dm_dbo.your_registered_table (field1, field2) values ('value1', 'value2')"
    IDfCollection col = null;              // collection for the result
    IDfClientX clientx = new DfClientX();
    IDfQuery q = clientx.getQuery();       // create the query object
    q.setDQL(queryString);                 // give it the DQL statement
    col = q.execute(session, IDfQuery.DF_EXEC_QUERY);
    System.out.println("Query executed.");
    return col;
}
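
A hypothetical usage of execQuery() above (the table and column names are examples only -- substitute your own registered table):

IDfCollection result = execQuery(
        "insert into dm_dbo.my_registered_table (doc_id, status) "
        + "values ('0900000000000001', 'published')", session);
if (result != null) {
    result.close();   // always release the collection when done
}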

-------------------------------------------------------------------------------------

How do I avoid redundant rows in this simple query?

select distinct d.object_name,sys.authors from dm_document d, dm_sysobject_r sys
where folder ('/xyz/xyz_first') and
d.r_object_id = sys.r_object_id and
(sys.i_position=-1 or (sys.i_position<=-2 and sys.authors is not null))

-------------------------------------------------------------------------------------

Documentum Knowledgebase Blog

All,

I've created this blog to share my learning and knowledge of Documentum, in the hope that it also helps others in their work.


Regards,
Raj