                        Shark Workflow Engine Core Configuration
Note:
    Some simple English passages are left untranslated, since I believe translating them would be a waste of time. Apologies for any inconvenience; tell me which parts you need translated and I will translate them for you.
 
 
 
 
   
 
1.      What is Enhydra Shark?
This is a workflow engine completely based on the WfMC and OMG specifications.
·         It uses WfMC's XML Process Definition Language (XPDL) as its native workflow definition format.
·         In its standard kernel implementation, shark is a library which does not create its own threads, and it can be used in many different environments (from a WEB application, from a swing application, deployed as a CORBA service, in an EJB container, ...). In other words, it serves as a back-end business-logic service and does not involve the presentation layer. There is a separate project, a Swing admin application, that uses shark directly through its POJO interface.
·         It is very configurable, and all of its "internal" interfaces, as well as the complete kernel, can be replaced by another implementation.
·         It can be used from many VMs simultaneously (in a cluster scenario).
·         It can be configured to use an organizational structure defined on an LDAP server (through a specific implementation of shark's UserGroup component).
·         It does not use any of XPDL's Extended Attributes for its execution rules.
·         It has full JTA support.
·         It uses DODS (the O/R mapping tool from Enhydra, comparable to Hibernate), which enables shark to use almost any DB system for storing information, and it can easily be configured to switch the target DB vendor and/or URL (it has predefined scripts, and means to automatically create the appropriate tables in those DBs using Octopus, the ETL tool from Enhydra).
·         It implements the ToolAgent concept defined by WfMC to execute the tools of automatic activities (several useful ToolAgents come with shark).
·         Shark can use custom Java classes (and even interfaces or abstract classes) as process variables.
 
 
2.     Starting the Shark Engine

Shark can be started from a client application by configuring it first (which can be done in several different ways, described below), and then by getting an instance of it. This is the most common way to use shark from an application:
 
String confFilePath="c:/Shark.conf";
Shark.configure(confFilePath);
Shark shark=Shark.getInstance();
Everything else can be done through the Shark interface.
For reference, here is the Shark class from the sharkcommonapi.jar package:
package org.enhydra.shark;
 
import java.io.File;
import java.lang.reflect.Method;
import java.util.Properties;
import org.enhydra.shark.api.RootError;
import org.enhydra.shark.api.client.wfmc.wapi.WAPI;
import org.enhydra.shark.api.client.wfservice.AdminMisc;
import org.enhydra.shark.api.client.wfservice.ExecutionAdministration;
import org.enhydra.shark.api.client.wfservice.PackageAdministration;
import org.enhydra.shark.api.client.wfservice.SharkConnection;
import org.enhydra.shark.api.client.wfservice.SharkInterface;
import org.enhydra.shark.api.client.wfservice.XPDLBrowser;
import org.enhydra.shark.api.common.ActivityFilterBuilder;
import org.enhydra.shark.api.common.AssignmentFilterBuilder;
import org.enhydra.shark.api.common.EventAuditFilterBuilder;
import org.enhydra.shark.api.common.ProcessFilterBuilder;
import org.enhydra.shark.api.common.ProcessMgrFilterBuilder;
import org.enhydra.shark.api.common.ResourceFilterBuilder;
 
public final class Shark
    implements SharkInterface
{

    public static void configure(Properties props)
    {
        try
        {
            _conf(props);
        }
        catch(Exception e)
        {
            throw new RootError(e);
        }
    }

    public static void configure(String filePath)
    {
        try
        {
            _conf(filePath);
        }
        catch(Exception e)
        {
            throw new RootError(e);
        }
    }

    public static void configure(File configFile)
    {
        try
        {
            _conf(configFile);
        }
        catch(Exception e)
        {
            throw new RootError(e);
        }
    }

    public static void configure()
    {
        try
        {
            Class.forName(implClassName).getMethod("configure", new Class[0]).invoke(null, new Object[0]);
        }
        catch(Exception e)
        {
            throw new RootError(e);
        }
    }

    public static Shark getInstance()
    {
        if(null == theEngine)
            try
            {
                theEngine = (SharkInterface)Class.forName(implClassName).getDeclaredMethod("getInstance", new Class[0]).invoke(null, new Object[0]);
            }
            catch(Exception e)
            {
                throw new RootError(e);
            }
        return me;
    }

    private static void _conf(Object configuration)
        throws Exception
    {
        Class.forName(implClassName).getMethod("configure", new Class[] {
            configuration.getClass()
        }).invoke(null, new Object[] {
            configuration
        });
    }
 
    private Shark()
    {
    }
 
    public AdminMisc getAdminMisc()
    {
        return theEngine.getAdminMisc();
    }
 
    public ExecutionAdministration getExecutionAdministration()
    {
        return theEngine.getExecutionAdministration();
    }
 
    public PackageAdministration getPackageAdministration()
    {
        return theEngine.getPackageAdministration();
    }
 
    public SharkConnection getSharkConnection()
    {
        return theEngine.getSharkConnection();
    }
 
    public ActivityFilterBuilder getActivityFilterBuilder()
    {
        return theEngine.getActivityFilterBuilder();
    }
 
    public AssignmentFilterBuilder getAssignmentFilterBuilder()
    {
        return theEngine.getAssignmentFilterBuilder();
    }
 
    public EventAuditFilterBuilder getEventAuditFilterBuilder()
    {
        return theEngine.getEventAuditFilterBuilder();
    }
 
    public ProcessFilterBuilder getProcessFilterBuilder()
    {
        return theEngine.getProcessFilterBuilder();
    }
 
    public ProcessMgrFilterBuilder getProcessMgrFilterBuilder()
    {
        return theEngine.getProcessMgrFilterBuilder();
    }
 
    public ResourceFilterBuilder getResourceFilterBuilder()
    {
        return theEngine.getResourceFilterBuilder();
    }
 
    public XPDLBrowser getXPDLBrowser()
    {
        return theEngine.getXPDLBrowser();
    }
 
    public Properties getProperties()
    {
        return theEngine.getProperties();
    }
 
    public WAPI getWAPIConnection()
    {
        return theEngine.getWAPIConnection();
    }
 
    private static SharkInterface theEngine;
    private static String implClassName = "org.enhydra.shark.SharkEngineManager";
    private static Shark me = new Shark();
 
}
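The facade's configure()/getInstance() methods locate org.enhydra.shark.SharkEngineManager by name and invoke its static methods reflectively, so the API jar has no compile-time dependency on the engine. A minimal, self-contained sketch of the same lazy, reflection-based delegation pattern (class and property names below are illustrative, not Shark's real ones):

```java
// Sketch of the reflection-based delegation used by the Shark facade above:
// the facade loads the engine class by name and invokes its static methods
// reflectively. Names are illustrative, not Shark's.
public class FacadeDemo
{
    // Stand-in for org.enhydra.shark.SharkEngineManager
    public static class EngineManager
    {
        private static EngineManager instance;
        private static String config = "unconfigured";

        public static void configure(String conf) { config = conf; }

        public static EngineManager getInstance()
        {
            if(null == instance)
                instance = new EngineManager();
            return instance;
        }

        public String getConfig() { return config; }
    }

    private static String implClassName = FacadeDemo.class.getName() + "$EngineManager";

    public static void configure(String conf)
    {
        try
        {
            Class.forName(implClassName).getMethod("configure", String.class).invoke(null, conf);
        }
        catch(Exception e)
        {
            throw new RuntimeException(e); // Shark wraps such failures in RootError
        }
    }

    public static Object getInstance()
    {
        try
        {
            return Class.forName(implClassName).getMethod("getInstance").invoke(null);
        }
        catch(Exception e)
        {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args)
    {
        configure("c:/Shark.conf");
        EngineManager em = (EngineManager) getInstance();
        System.out.println(em.getConfig()); // prints: c:/Shark.conf
    }
}
```

The benefit of this indirection is that client code compiled against the facade never needs the engine implementation on its compile classpath, only at runtime.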
 
 
3.     Configuring Shark
There are four different ways to configure shark:
1.    use configure () method without parameters:
then shark is configured only from the config file that is placed in its jar file. Shark configured this way works with default settings, and without many of the internal API implementations (Caching, EventAudit, Logging, ...).
 
2.    use configure (String filePath) method:
it creates a File object out of the path given in the filePath string, and calls the configure (File configFile) method described next.
3.    use configure (File configFile) method:
shark first does a basic configuration based on the properties given in its jar file, and then does additional configuration from the file specified. If the configuration File defines the same properties as the default configuration file from the jar, those property values override the default ones, and all additional properties from the File/Properties are added to the shark configuration. The configuration file you pass as a parameter therefore does not need to define the whole configuration; it can just redefine some default configuration parameters (e.g. how to handle an OTHERWISE transition, whether to re-evaluate deadlines, whether to create a default assignment, ...) and add some additional configuration parameters (e.g. AssignmentManagerClassName).
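The override semantics just described (jar defaults loaded first, external file entries overriding and extending them) can be sketched with plain java.util.Properties. The keys are real Shark.conf keys; the values and the "my.Mgr" class name are illustrative:

```java
import java.util.Properties;

// Sketch of Shark's configuration override semantics: defaults shipped in
// the jar are loaded first, then the external config file's entries
// override or extend them.
public class ConfigOverride
{
    public static Properties effective(Properties defaults, Properties external)
    {
        Properties merged = new Properties();
        merged.putAll(defaults);  // base configuration from the jar
        merged.putAll(external);  // external entries override and extend it
        return merged;
    }

    public static void main(String[] args)
    {
        Properties defaults = new Properties();
        defaults.setProperty("enginename", "Shark");
        defaults.setProperty("SharkKernel.createAssignments", "true");

        Properties external = new Properties();
        external.setProperty("enginename", "MyShark");                // redefined
        external.setProperty("AssignmentManagerClassName", "my.Mgr"); // added

        Properties eff = effective(defaults, external);
        System.out.println(eff.getProperty("enginename"));                    // MyShark
        System.out.println(eff.getProperty("SharkKernel.createAssignments")); // true
    }
}
```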
 
4.    use configure (Properties props) method:
it does basically the same as the previous method (in fact, the previous method converts the file content into a Properties object), but it offers client applications the possibility to configure shark with a Java Properties object.
You can use many shark instances configured differently (you just need to specify different config files/paths, or define different Properties objects). If you want to use several shark instances (from more than one VM) on the same DB, you should ALWAYS set the DODS cache sizes to zero, and the CacheManagerClassName property should not be set.
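As a sketch, using the DODS cache and cache manager properties shown later in this document, a multi-VM setup sharing one DB would look like this:

```
# several shark instances (more than one VM) on the same DB:
# all DODS cache sizes must be zero ...
DatabaseManager.defaults.cache.maxCacheSize=0
DatabaseManager.defaults.cache.maxSimpleCacheSize=0
DatabaseManager.defaults.cache.maxComplexCacheSize=0
# ... and no cache manager should be set
#CacheManagerClassName=org.enhydra.shark.caching.LRUCacheMgr
```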
As already mentioned, shark is a very configurable engine, and all of its components, including the kernel, can be replaced by a custom implementation.
The most common way to configure shark is to define a custom Shark.conf file. Here we describe how you can configure shark, by briefly explaining the meaning of the entries in the standard Shark.conf file coming with the shark distribution:
 
4.     The Shark.conf engine configuration file, briefly explained
Setting "enginename" parameter
You can set the name of shark instance by editing enginename property. Here is a part of configuration file for setting this property:
######################### NAME
# the name of shark instance
enginename=Shark
It can be used to identify a shark instance (NOTE: in shark versions before 2.0 this parameter also had another meaning, and each shark instance was required to have a different name).
Setting kernel behaviour in the case of unsatisfied split conditions
You can set how the standard shark kernel reacts when a process has nowhere to go after an activity is finished, because all of the activity's outgoing transitions are unsatisfied (evaluated to false). Of course, this parameter is only meaningful for activities that have at least one outgoing transition.
Here is a part of configuration file for setting this property:
######################### KERNEL SETTING for UNSATISFIED SPLIT CONDITIONS
# There can be cases when some activity that has outgoing transitions other
# than to itself (other than a circular one), has nowhere to go based on
# calculation of these conditions (all of the conditions are evaluated to false)
# In that case, the process could hang (it will not go anywhere, and it will
# also not finish), finish (if there is no other active activities), or
# the last transaction that finishes the activity will be rolled back.
# This settings apply to the block activity's activities also, but the difference
# is that if you set parameter to FINISH_IF_POSSIBLE, shark will actually
# finish block activity if possible.
# The possible values for the entry are IGNORE, FINISH_IF_POSSIBLE and ROLLBACK,
# and default kernel behaviour is FINISH_IF_POSSIBLE
#SharkKernel.UnsatisfiedSplitConditionsHandling=FINISH_IF_POSSIBLE
So, there are three possible solutions as described, and the default one is to finish the process if possible.
Setting kernel to evaluate OTHERWISE conditions last
The XPDL spec does not say that an OTHERWISE transition should be executed only if no other transition condition evaluates to true (in the case of an XOR split). So if you, for example, put the OTHERWISE transition as the first outgoing transition of some activity, the other transitions' conditions won't even be considered.
You can configure shark to deviate from the spec, so that the OTHERWISE transition is evaluated and executed only if no other transition condition evaluates to true. To do that, you should set the following property to true:
SharkKernel.handleOtherwiseTransitionLast=false
    
This parameter can save XPDL designers a lot of headaches, by removing the need for extra care about OTHERWISE transition positioning.
Setting kernel for assignment creation
Determines whether the kernel will create assignments - the default is true. There are situations when assignment creation is not necessary, for example when you always execute activities directly using the WfActivity change_state() method.
SharkKernel.createAssignments=true
Since this setting affects the complete engine, you should carefully consider whether the application you're accessing shark with allows you to change the default setting.
Setting kernel for default assignment creation
Determines whether the kernel will create a default assignment for the process creator if the assignment manager returns zero assignments.
NOTE: if this property is set to true, there can be side-effects with Tool activities with Manual Start and Finish mode.
SharkKernel.createDefaultAssignment=true
Default kernel value is true.
Setting kernel for resource handling during assignment creation
Defines the limit number for loading all WfResources from the DB before creating assignments.
When the kernel determines that more assignments than the number specified by the limit should be created, it makes a call to retrieve all WfResources from the DB.
When DODS is used as the persistence layer, this can improve performance if there are not too many WfResource objects in the system:
SharkKernel.LimitForRetrievingAllResourcesWhenCreatingAssignments=5
Default kernel value is 5.
Setting kernel behaviour to re-evaluate assignments at engine startup
It is possible to force kernel to re-evaluate assignments during shark initialization. This can be done by changing the following property:
#Assignments.InitialReevaluation=false
If you set this property to true, all non-accepted assignments are re-evaluated (old ones are deleted, and new ones are created based on the current mappings, the current state of User/Group information and the current implementation of the AssignmentManager class).
The default kernel setting is not to re-evaluate assignments.
Setting kernel for assignment handling
Determines whether the kernel deletes the other assignments from the DB every time someone accepts/rejects an assignment, re-evaluating assignments each time this happens. If set to true, the side-effect is that if there was a reassignment, and the user that got the reassigned assignment rejects it, he will not get it afterwards.
SharkKernel.deleteOtherAssignments=true
The shark kernel default is true.
Setting kernel behaviour to fill the caches on startup
If you want shark to fill its Process and Resource caches at startup, you should edit the following entries from configuration file:
#Cache.InitProcessCacheString=*
#Cache.InitResourceCacheString=*
If you uncomment these lines, all processes and resources will be created based on DB data and filled into the cache (the number is actually restricted by the cache size).
The value of these properties can be set as a comma separated list of the process/resource ids that need to be put into cache on engine start, e.g.: Cache.InitProcessCacheString=1_test_js_basic, 5_test_js_Game
Shark kernel default is not to initialize caches.
Setting kernel behaviour for re-evaluating deadline limits
If you want shark not to re-evaluate deadlines each time the external deadline management checks for deadlines, you should set the following entry to false (the default kernel setting is true):
#Deadlines.reevaluateDeadlines=true
Determines whether the process or activity context is used when re-evaluating deadlines. The default kernel setting is activity context.
Deadlines.useProcessContext=false
Determines whether an asynchronous deadline should be raised only once, or every time a deadline check is performed. The default kernel setting is true (raise the deadline only once).
Deadlines.raiseAsyncDeadlineOnlyOnce=true
Setting kernel and event audit mgr for persisting old event audit data
Determines whether old event audit data should be persisted. The default is to persist. The value of this property must be respected by both the kernel and the event audit manager.
PERSIST_OLD_EVENT_AUDIT_DATA=true
    
Default kernel setting is true.
Setting kernel for the priority handling
Determines whether it is allowed to set the priority of a WfProcess/WfActivity out of the range [1-5] defined by the OMG spec:
#SharkKernel.allowOutOfRangePriority=false
    
Default kernel setting is false.
Setting properties for browsing LDAP server
If you are using an LDAP server to hold your organization structure, you can configure shark to use our LDAP implementation of the UserGroup and Authentication interfaces (how to set it up is explained later in the text), and then you MUST define some LDAP properties.
At the moment, shark implementations of the UserGroup interfaces support two types of LDAP structures. The first structure is marked as type 0, and the second is marked as type 1. The LDAP structures are explained in detail in the document LDAP structures in Shark (html, pdf).
You can set these properties based on your LDAP server configuration, by changing the following part of the configuration file:
######################### LDAP SETTINGS
# Shark can use LDAP implementation of Authentication and UserGroup interfaces,
# and these are settings required by these implementations to access and
# browse the LDAP server
LDAPHost=localhost
LDAPPort=389
# possible values for LDAPStructureType parameter are 0 and 1
# 0 is a simple structure, where the possibility that one group or user
# belongs to more than one group is not supported
# 1 is a more complex structure that supports the possibility that one group
# or user belongs to more than one group
LDAPStructureType=1
LDAPSearchBase=
LDAPGroupObjectClasses=organizationalUnit
LDAPUserObjectClasses=inetOrgPerson
# parameter LDAPRelationObjectClasses is only needed for LDAPStructureType=1
LDAPRelationObjectClasses=groupOfNames
LDAPGroupUniqueAttributeName=ou
LDAPGroupDescriptionAttributeName=description
LDAPUserUniqueAttributeName=userid
# parameter LDAPRelationUniqueAttributeName is only needed for LDAPStructureType=1
LDAPRelationUniqueAttributeName=cn
# parameter LDAPRelationMemberAttributeName is only needed for LDAPStructureType=1
LDAPRelationMemberAttributeName=member
LDAPUserPasswordAttributeName=userpassword
LDAPUserRealNameAttributeName=cn
LDAPUserFirstNameAttributeName=givenName
LDAPUserLastNameAttributeName=sn
LDAPUserEmailAttributeName=mail
LDAPUser=sasaboy
LDAPPassword=s
# parameter LDAPGroupGroupsName is only needed for LDAPStructureType=1
LDAPGroupGroupsName=Groups
# parameter LDAPGroupUsersName is only needed for LDAPStructureType=1
LDAPGroupUsersName=Users
# parameter LDAPGroupGroupRelationsName is only needed for LDAPStructureType=1
LDAPGroupGroupRelationsName=GroupRelations
# parameter LDAPGroupUserRelationsName is only needed for LDAPStructureType=1
LDAPGroupUserRelationsName=UserRelations
·         LDAPHost - the address of the machine where LDAP server is running
·         LDAPPort - the port through which LDAP server can be accessed
·         LDAPStructureType - if set to 0, the simple structure is used, in which the possibility that one group or user belongs to more than one group is not supported; if set to 1, the more complex structure is used, which supports the possibility that one group or user belongs to more than one group
·         LDAPSearchBase - the name of the context or object to search (this is the root LDAP node where all queries will start at).
·         LDAPGroupObjectClasses - the comma separated list of LDAP object classes representing Group of users. It is important that these classes must have a mandatory attribute whose value uniquely identifies each entry throughout the LDAP tree.
·         LDAPUserObjectClasses - the comma separated list of LDAP object classes representing shark users. It is important that these classes must have a mandatory attribute whose value uniquely identifies each entry throughout the LDAP tree.
·         LDAPRelationObjectClasses - only used in structure type 1, the comma separated list of LDAP object classes representing relations between shark users and group or between shark groups. It is important that these classes must have a mandatory attribute whose value uniquely identifies each entry throughout the LDAP tree.
·         LDAPGroupUniqueAttributeName - the name of the attribute that is mandatory for each LDAP object class representing a Group of users. The value of this attribute MUST be unique for each LDAP entry for these object classes throughout the LDAP tree.
·         LDAPGroupDescriptionAttributeName - the name of attribute of LDAP object classes representing Group of users that represents the Group description.
·         LDAPUserUniqueAttributeName - the name of attribute that is mandatory for each LDAP object class representing User. The value of this attribute MUST be unique for each LDAP entry for these object classes throughout the LDAP tree. When shark uses LDAP for authentication and user group management, this attribute represents the username for logging into shark.
·         LDAPRelationUniqueAttributeName - only used in structure type 1, the name of the attribute that is mandatory for each LDAP object class representing a Relation of groups, or of a group and users. The value of this attribute MUST be unique for each LDAP entry for these object classes throughout the LDAP tree
·         LDAPRelationMemberAttributeName - only used in structure type 1, the name of the attribute of LDAP object classes (representing a Relation of groups, or of a group and users) that represents the member (user or group) included in the relation.
·         LDAPUserPasswordAttributeName - the name of the attribute that is mandatory for each LDAP object class representing a User. When shark uses LDAP for authentication and user group management, this attribute represents the password needed for logging into shark.
·         LDAPUserRealNameAttributeName - the name of the attribute of LDAP object classes representing User, that represents the real name of the shark user.
·         LDAPUserFirstNameAttributeName - the name of the attribute of LDAP object classes representing User, that represents the first name of the shark user.
·         LDAPUserLastNameAttributeName - the name of the attribute of LDAP object classes representing User, that represents the last name of the shark user.
·         LDAPUserEmailAttributeName - the name of the attribute of LDAP object classes representing User, that represents user's email address.
·         LDAPUser - when LDAP server requires credentials for reading, this is the username that will be used when connecting LDAP server
·         LDAPPassword - when LDAP server requires credentials for reading, this is the password that will be used when connecting LDAP server
·         LDAPGroupGroupsName - only used in structure type 1, the name of the specific group that must be created and which will contain all groups
·         LDAPGroupUsersName - only used in structure type 1, the name of the specific group that must be created and which will contain all users
·         LDAPGroupGroupRelationsName - only used in structure type 1, the name of the specific group that must be created and which will contain all relations between groups
·         LDAPGroupUserRelationsName - only used in structure type 1, the name of the specific group that must be created and which will contain all relations between groups and users
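To illustrate, a minimal fragment for structure type 0 keeps only the entries not marked above as type-1-only (the attribute values here are illustrative and must match your own LDAP schema):

```
LDAPHost=localhost
LDAPPort=389
LDAPStructureType=0
LDAPSearchBase=
LDAPGroupObjectClasses=organizationalUnit
LDAPUserObjectClasses=inetOrgPerson
LDAPGroupUniqueAttributeName=ou
LDAPGroupDescriptionAttributeName=description
LDAPUserUniqueAttributeName=userid
LDAPUserPasswordAttributeName=userpassword
LDAPUserRealNameAttributeName=cn
LDAPUserEmailAttributeName=mail
```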
Setting kernel's CallbackUtilities implementation class
If one wants to provide their own implementation of the CallbackUtilities interface, they can do so by changing the following attribute:
######################### CALLBACK UTILITIES
# used for logging, and getting the shark properties
# the default kernel setting is as follows
#CallbackUtilitiesClassName=org.enhydra.shark.CallbackUtil
The name of the class that is used by default is shown commented out.
This interface implementation is passed to all internal interface implementations, and is used by them to read shark property values and to log events.
Setting kernel's ObjectFactory implementation class
If one wants to replace some parts of the kernel with their own implementation (i.e. to replace the WfActivityInternal, WfProcessInternal, ... implementations), they should create their own class based on this interface, and configure shark to use it.
This can be done by changing the following part of configuration file:
######################### OBJECT FACTORY
# the class name of the factory used to creating kernel objects
# the default kernel setting is as follows
#ObjectFactoryClassName=org.enhydra.shark.SharkObjectFactory
The name of the class that is used by default is shown commented out.
Setting kernel's ToolActivityHandler implementation class
If one wants to set their own ToolActivityHandler implementation, which communicates with tool agents in a different way than the standard implementation does, they can configure the following:
######################### TOOL ACTIVITY HANDLER
# the class name of the manager used to execute tool agents
# the default kernel setting is as follows
#ToolActivityHandlerClassName=org.enhydra.shark.StandardToolActivityHandler
The name of the class that is used by default is shown commented out.
New components and their settings in Shark.conf
ExpressionBuilderManager is a newly introduced component in the Shark architecture. Its functionality simply didn't fit into any of the previously existing components. Expression builders provide a hierarchy of interfaces to ease the building of expressions for Wf*Iterators.
ExpressionBuilderManagerClassName=org.enhydra.shark.ExpressionBuilderMgr
    
The default setting provides a DODS compatible implementation.
Database configuration
This section of the configuration file is related to the DODS implementation of the persistence APIs.
In the shark distribution, we provide SQL scripts for creating tables for most DBs supported by DODS, and the appropriate LoaderJob files that Octopus can use to create the DB tables when given the appropriate drivers. These files can be found in the conf/sql folder.
The default database used is HypersonicSQL; the settings for other DBs are commented out. This is the part of the configuration file that shows this:
#
# Turn on/off debugging for transactions or queries. Valid values
# are "true" or "false".
#
DatabaseManager.Debug="false"
 
# Special settings for Postgresql DB
#DatabaseManager.ObjectIdColumnName=ObjectId
#DatabaseManager.VersionColumnName=ObjectVersion
 
#
# Maximum amount of time that a thread will wait for
# a connection from the connection pool before an
# exception is thrown. This will prevent possible dead
# locks. The time out is in milliseconds. If the
# time out is <= zero, the allocation of connections
# will wait indefinitely.
#
#DatabaseManager.DB.sharkdb.Connection.AllocationTimeout=10000
 
#
# Required for HSQL: column name NEXT must be used
# with table name prefix
# NOTE: When working with other DBs, you should comment these two properties
#
DatabaseManager.DB.sharkdb.ObjectId.NextWithPrefix = true
DatabaseManager.DB.sharkdb.Connection.ShutDownString = SHUTDOWN
 
#
# Used to log database (SQL) activity.
#
DatabaseManager.DB.sharkdb.Connection.Logging=false
 
There is another important DODS configuration aspect - the cache sizes:
#
# Default cache configuration
#
DatabaseManager.defaults.cache.maxCacheSize=100
DatabaseManager.defaults.cache.maxSimpleCacheSize=50
DatabaseManager.defaults.cache.maxComplexCacheSize=25
If you know that several instances of shark will be used in several VMs using the same DB, you should set all these cache sizes to zero. Along with this, the cache manager implementation (explained later in the text) should not be used.
Setting persistence components' variable data model
The following options are described together, although they affect different components, because the options' intention and effect are the same.
Determines the maximum size of a String that will be stored in a VARCHAR field. Strings whose size is greater than the specified value will be stored as BLOBs. The maximum size that can be set is 4000 (the default):
DODSPersistentManager.maxVARCHARSize=4000
DODSEventAuditManager.maxVARCHARSize=4000
    
Determines which data model is used for storing process and activity variables. There are two options:
1.    using the standard data model, where all data types are in one table (including a BLOB data type for persisting custom Java objects and large Strings)
2.    using the optional data model, where one table contains all data types except BLOB, and another table, referencing the first, is used only for storing BLOB information (for persisting custom Java objects and large Strings)
The default is the standard data model, but the optional data model can improve performance in use cases where there are not many custom Java objects or large String objects, and when the shark and DODS caches are not used; it is an especially good choice when using Oracle DB.
DODSPersistentManager.useStandardVariableDataModel=true
DODSEventAuditManager.useStandardVariableDataModel=true
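The maxVARCHARSize rule above amounts to a simple size threshold. A sketch of the decision (illustrative code, not DODS internals):

```java
// Sketch of the documented storage rule: strings up to maxVARCHARSize go
// into a VARCHAR column, longer ones are stored as BLOBs. Illustrative only.
public class VarcharOrBlob
{
    public static String storageFor(String value, int maxVarcharSize)
    {
        // anything over the configured threshold is persisted as a BLOB
        return value.length() <= maxVarcharSize ? "VARCHAR" : "BLOB";
    }

    public static void main(String[] args)
    {
        System.out.println(storageFor("short value", 4000));    // VARCHAR
        System.out.println(storageFor("x".repeat(4001), 4000)); // BLOB
    }
}
```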
    
Setting Assignment manager implementation class
If you would like to create your own Assignment manager, which decides which assignments are created for an activity, you can implement it based on the AssignmentManager interface, and configure shark to use it by changing the following setting:
AssignmentManagerClassName=org.enhydra.shark.assignment.StandardAssignmentManager
Shark comes with three different implementations of this manager:
·         Standard - just returns the list of users passed as a parameter, or, if there are no users in the list, it returns the user that created the corresponding process.
·         History Related - if certain special "Extended attributes" are defined in the XPDL for an activity definition, this implementation checks the assignment history (who has already executed an activity with such a definition, ...) to make a decision about the assignments that should be created.
·         XPDL Straight Participant Mapping - it creates assignments for the user that has the same Id as the XPDL performer of the activity.
NOTE: if you do not set any implementation (you simply comment out the line above), shark will use the default procedure. Actually, the standard implementation of the assignment API is not very useful; it basically just returns the first valid option.
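The standard manager's documented fallback rule (use the supplied users, otherwise fall back to the process creator) can be sketched like this; the code is illustrative, not Shark's actual class:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the standard assignment rule described above: return the users
// passed in, or fall back to the process creator when the list is empty.
public class StandardAssignmentRule
{
    public static List<String> assignees(List<String> users, String processCreator)
    {
        if (users != null && !users.isEmpty())
        {
            return users;                 // use the supplied user list
        }
        List<String> fallback = new ArrayList<>();
        fallback.add(processCreator);     // default assignment for the creator
        return fallback;
    }

    public static void main(String[] args)
    {
        System.out.println(assignees(List.of("alice", "bob"), "admin")); // [alice, bob]
        System.out.println(assignees(List.of(), "admin"));               // [admin]
    }
}
```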
Setting user group implementation
Shark's standard and history related assignment managers can be configured to use some implementation of UserGroup API when determining which user(s) should get the assignment.
Shark comes with an LDAP implementation of the UserGroup API, and the Admin application brings another, DB based, implementation. Admin's DB based implementation uses the DB for retrieving information about the organizational structure, and the LDAP based implementation uses an LDAP server for getting organizational information.
Here is a part of configuration file for setting UserGroup manager implementation for standard assignment manager:
StandardAssignmentManager.UserGroupManagerClassName=org.enhydra.shark.usergroup.DODSUserGroupManager
NOTE: shark can work without an implementation of this API - if you do not want to use any implementation, simply comment out the line above.
Setting participant map persistence implementation
Shark's standard and history related assignment managers can be configured to use an implementation of the ParticipantMapping API when determining which user(s) should get the assignment.
This API is used to retrieve mapping information between XPDL participants and shark users. The Shark Admin application comes with a DODS based participant map persistence implementation.
You can provide your own implementation of the participant map persistence API.
Here is a part of configuration file for setting ParticipantMapping manager implementation for standard assignment manager:
StandardAssignmentManager.ParticipantMapPersistenceManagerClassName=org.enhydra.shark.partmappersistence.DODSParticipantMappingMgr
NOTE: if you comment out the line above, shark will work without a participant map persistence API implementation.
Setting Caching implementation
Shark comes with an LRU based cache implementation for holding Process and Resource objects. By default, shark is configured to use this cache implementation, which can speed up access by clients.
This is the section of configuration file that defines cache implementation, and its sizes:
#=============================================================================
# Default cache is LRU
#
#-----------------------------------------------------------------------------
# Cache defaults
#
CacheManagerClassName=org.enhydra.shark.caching.LRUCacheMgr
 
# Default LRU cache sizes (LRU implementation default is 100 for each cache)
#LRUProcessCache.Size=100
#LRUResourceCache.Size=100
NOTE: if you do not set any implementation (simply comment out the CacheManagerClassName line above), shark will not perform any caching.
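To illustrate what LRU caching means here, the following is a minimal sketch of an LRU cache in the spirit of the LRUCacheMgr described above. The class name is hypothetical and this is not shark's actual implementation; it only demonstrates the eviction policy, using `LinkedHashMap`'s access-order mode.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a minimal LRU cache demonstrating the policy
// used by shark's LRU cache manager. Not shark's actual code.
class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruCacheSketch(int maxSize) {
        // accessOrder=true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cache exceeds maxSize
        return size() > maxSize;
    }
}
```

With `LRUProcessCache.Size=100`, such a cache would keep at most the 100 most recently used process objects, evicting the least recently touched one on overflow.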
Setting instance persistence implementation
The implementation of this API is used to store information about shark's processes, activities, ... in the DB. Shark comes with a DODS based instance persistence implementation. You can write your own implementation of this interface (for example, using Hibernate or EJB); to configure shark to work with it, edit the following section of the configuration file:
#
# DODS instance persistent manager defaults
#
InstancePersistenceManagerClassName=org.enhydra.shark.instancepersistence.DODSPersistentManager
Shark can't work without an instance persistence implementation.
NOTE: If you provide another instance persistence implementation, you must also provide your own implementation of the SharkTransaction API.
Configuring DODS instance persistence implementation to delete processes when they finish
By default, the DODS implementation of the instance persistence interface does not delete finished processes; they are left in the DB. This behaviour can be changed by setting the following parameter to true:
# Determines if finished processes should be deleted from DB (DODS persistence
# manager default is false)
#DODSPersistentManager.deleteFinishedProcesses=false
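So, to actually have finished processes deleted, you would uncomment the parameter and set it to true:

```
# Delete finished processes from the DB when they finish
DODSPersistentManager.deleteFinishedProcesses=true
```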
Setting logging API implementation
Shark comes with a default logger implementation based on log4j. You can write your own implementation of the Logging API and set it by editing the configuration file, probably adding some additional entries that your logger implementation will read. Here is the complete logger configuration for shark's standard logger:
#
# Standard logging manager defaults
#
LoggingManagerClassName=org.enhydra.shark.logging.StandardLoggingManager
 
 
# Standard Logging manager is using log4j, and here is log4j configuration
#
# log4j.rootLogger=info, SharkExecution, Console
 
log4j.appender.Database=org.apache.log4j.RollingFileAppender
log4j.appender.Database.File=C:/users/sasaboy/Shark/output/Shark/logs/SharkPersistence.log
log4j.appender.Database.MaxFileSize=10MB
log4j.appender.Database.MaxBackupIndex=2
log4j.appender.Database.layout=org.apache.log4j.PatternLayout
log4j.appender.Database.layout.ConversionPattern=%d{ISO8601}: %m%n
 
log4j.appender.XMLOutFormatForPersistence=org.apache.log4j.FileAppender
log4j.appender.XMLOutFormatForPersistence.File=C:/users/sasaboy/Shark/output/Shark/logs/chainsaw-persistence.log
log4j.appender.XMLOutFormatForPersistence.append=false
log4j.appender.XMLOutFormatForPersistence.layout=org.apache.log4j.xml.XMLLayout
 
log4j.appender.PackageEvents=org.apache.log4j.RollingFileAppender
log4j.appender.PackageEvents.File=C:/users/sasaboy/Shark/output/Shark/logs/SharkPackageHandlingEvents.log
log4j.appender.PackageEvents.MaxFileSize=10MB
log4j.appender.PackageEvents.MaxBackupIndex=2
log4j.appender.PackageEvents.layout=org.apache.log4j.PatternLayout
log4j.appender.PackageEvents.layout.ConversionPattern=%d{ISO8601}: %m%n
 
log4j.appender.XMLOutFormatForPackageEvents=org.apache.log4j.FileAppender
log4j.appender.XMLOutFormatForPackageEvents.File=C:/users/sasaboy/Shark/output/Shark/logs/chainsaw-packageevents.log
log4j.appender.XMLOutFormatForPackageEvents.append=false
log4j.appender.XMLOutFormatForPackageEvents.layout=org.apache.log4j.xml.XMLLayout
 
log4j.appender.SharkExecution=org.apache.log4j.RollingFileAppender
log4j.appender.SharkExecution.File=C:/users/sasaboy/Shark/output/Shark/logs/SharkExecutionFlow.log
log4j.appender.SharkExecution.MaxFileSize=10MB
log4j.appender.SharkExecution.MaxBackupIndex=2
log4j.appender.SharkExecution.layout=org.apache.log4j.PatternLayout
log4j.appender.SharkExecution.layout.ConversionPattern=%d{ISO8601}: %m%n
 
log4j.appender.XMLOutFormatForExecution=org.apache.log4j.FileAppender
log4j.appender.XMLOutFormatForExecution.File=C:/users/sasaboy/Shark/output/Shark/logs/chainsaw-execution.log
log4j.appender.XMLOutFormatForExecution.append=false
log4j.appender.XMLOutFormatForExecution.layout=org.apache.log4j.xml.XMLLayout
 
log4j.appender.NTEventLog=org.apache.log4j.nt.NTEventLogAppender
log4j.appender.NTEventLog.source=SharkCORBA-Service
log4j.appender.NTEventLog.append=false
log4j.appender.NTEventLog.layout=org.apache.log4j.PatternLayout
log4j.appender.NTEventLog.layout.ConversionPattern="%d{ISO8601}: [%t], %p, %c: %m%n"
 
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ISO8601}: %m%n
 
log4j.logger.Persistence=INFO,Database
#log4j.logger.Persistence=INFO,Database,XMLOutFormatForPersistence
 
log4j.logger.PackageEventLogger=INFO,PackageEvents
#log4j.logger.PackageEventLogger=INFO,PackageEvents,XMLOutFormatForPackageEvents
 
log4j.logger.Shark=INFO,Console,SharkExecution
#log4j.logger.Shark=INFO,Console,SharkExecution,XMLOutFormatForExecution
The standard logger implementation is written so that it can log even when no log4j settings are defined in the configuration file (in which case the implementation cannot configure log4j itself); log4j is then configured by the client application using shark.
The following log outputs are generated by default:
·         Server execution flow log - logs every significant shark operation, such as package loading, process instantiation, activity completion, .... These logs are also displayed in the console during shark execution.
·         Package Handling Events - logs every operation performed with Package definition files (XPDL files). These operations are:
o        loading of the package from external repository into shark's memory
o        unloading of the package from the shark
o        updating of the package that is already in the shark's memory
·         Server persistence log - logs every operation related to communication between the DODS instance persistence implementation and the underlying database.
You can force Shark to produce log files that can be viewed with log4j's chainsaw viewer. To do so, for each type of logger, comment out the first line and uncomment the second line that refers to that logger at the bottom of the logger configuration.
The output logs will then also be written to XML log files (chainsaw-execution.log, chainsaw-packageevents.log and chainsaw-persistence.log) that can be read by chainsaw.
Chainsaw can be started using the appropriate "chainsaw" script in the root of the project. Once it is started, open the desired log file via its "File->Load file..." menu item, and it will display the logs.
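For example, to additionally produce the chainsaw-readable execution log, you would swap the commented and uncommented `Shark` logger lines shown earlier, ending up with:

```
#log4j.logger.Shark=INFO,Console,SharkExecution
log4j.logger.Shark=INFO,Console,SharkExecution,XMLOutFormatForExecution
```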
NOTE: If you do not want any logging, comment out the LoggingManagerClassName line above, and shark will not log anywhere.
Setting repository persistence implementation
This API is used to store information about XPDL definitions and versions. Shark comes with two implementations of this API: one based on the file system, and one based on DODS.
You can provide your own implementation of this API to replace the current one. The default is the DODS implementation.
# Default repository persistent manager is DODS
#
 
#RepositoryPersistenceManagerClassName=org.enhydra.shark.repositorypersistence.FileSystemRepositoryPersistenceManager
 
# The location of xpdl repository.
# If you want to specify it by relative path, you must know that this path must
# be relative to the Shark.conf file (in conf folder)
FileSystemRepositoryPersistenceManager.XPDL_REPOSITORY=repository/internal
 
# The location of xpdl history repository.
# If you want to specify it by relative path, you must know that this path must
# be relative to the Shark.conf file (in conf folder)
FileSystemRepositoryPersistenceManager.XPDL_HISTORY_REPOSITORY=repository/internal/history
 
 
RepositoryPersistenceManagerClassName=org.enhydra.shark.repositorypersistence.DODSRepositoryPersistenceManager
 
# The database used for Repository persistence when using DODS implementation
#DODSRepositoryPersistenceManager.DatabaseName=sharkdb
 
# If set to true, the debug information on repository transaction will be
# written to console
#DODSRepositoryPersistenceManager.debug=false
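For example, to switch to the file-system based repository implementation, you would uncomment its line and comment out the DODS line, ending up with:

```
RepositoryPersistenceManagerClassName=org.enhydra.shark.repositorypersistence.FileSystemRepositoryPersistenceManager
#RepositoryPersistenceManagerClassName=org.enhydra.shark.repositorypersistence.DODSRepositoryPersistenceManager
```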
NOTE: Shark can't work without an implementation of this API.
Setting scripting manager implementation
Shark comes with a standard scripting manager implementation. This is a factory that returns the appropriate script evaluator; the standard implementation offers three different script evaluators: Python, JavaScript and BeanShell.
#=============================================================================
# Default Scripting manager is Standard
#
#-----------------------------------------------------------------------------
#
ScriptingManagerClassName=org.enhydra.shark.scripting.StandardScriptingManager
Shark can't work without a Scripting API implementation.
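To make the factory idea concrete, here is a minimal sketch of a scripting manager as a factory dispatching on the language name. The interface and class names are hypothetical, not shark's actual API; a real manager would return evaluators wrapping the corresponding script engines.

```java
// Illustrative sketch only: the factory idea behind a scripting manager.
// Names are hypothetical, not shark's actual API.
interface ScriptEvaluator {
    Object evaluate(String expression);
}

class ScriptingFactorySketch {
    static ScriptEvaluator getEvaluator(String language) {
        switch (language) {
            case "Python":
            case "JavaScript":
            case "BeanShell":
                // A real manager would return an evaluator wrapping the
                // matching script engine; this stub just tags the expression.
                return expression -> language + ":" + expression;
            default:
                throw new IllegalArgumentException("No evaluator for: " + language);
        }
    }
}
```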
Setting security (authorization) API implementation
This API is not well defined, and currently serves only as an example. In the future, this API will contain methods to authorize shark usage at the level of particular methods (e.g. a user will have to be authorized to create, abort, terminate or suspend a process).
#=============================================================================
# Default Security manager is Standard
#
#-----------------------------------------------------------------------------
#
SecurityManagerClassName=org.enhydra.shark.security.StandardSecurityManager
NOTE: If you don't want any authorization, just comment out the line above - shark can work without this API implementation.
Setting tool agents
Shark comes with a standard ToolAgentFactory implementation, several example tool agents (JavaScript, BeanShell, RuntimeApplication, SOAP, Mail and JavaClass), and a default tool agent implementation.
To learn more about tool agents, see the ToolAgent documentation.
These are configuration settings for tool agents:
#=============================================================================
# Default Tool agent settings
#
#-----------------------------------------------------------------------------
#
ToolAgentFactoryClassName=org.enhydra.shark.toolagent.ToolAgentFactoryImpl
 
# The list of tool agents
ToolAgent.JavaClassToolAgent=org.enhydra.shark.toolagent.JavaClassToolAgent
ToolAgent.JavaScriptToolAgent=org.enhydra.shark.toolagent.JavaScriptToolAgent
ToolAgent.BshToolAgent=org.enhydra.shark.toolagent.BshToolAgent
ToolAgent.RuntimeApplicationToolAgent=org.enhydra.shark.toolagent.RuntimeApplicationToolAgent
ToolAgent.MailToolAgent=org.enhydra.shark.toolagent.MailToolAgent
ToolAgent.SOAPToolAgent=org.enhydra.shark.toolagent.SOAPToolAgent
 
# Default tool agent is used when there are no mappings for some
# XPDL application definition
DefaultToolAgent=org.enhydra.shark.toolagent.DefaultToolAgent
 
# Specifies the size of cache for holding ext. attributes (for shark performance reason)
# Default -1 means unlimited
#AbstractToolAgent.extAttribsCacheSize=-1
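To plug in your own tool agent, you would add another entry to the list above (the class name below is purely hypothetical):

```
# Hypothetical custom tool agent added to the list
ToolAgent.MyToolAgent=com.mycompany.toolagent.MyToolAgent
```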
NOTE: shark can work without a tool agent API implementation, but then it can only execute processes that do not contain any "Tool" activity.
Setting application map persistence implementation
This API is used to retrieve mapping information between XPDL applications and tool agent applications. The Shark Admin application comes with a DODS based application map persistence implementation.
For a standard tool agent manager, you can specify which implementation of application map persistence API you want to use.
# Application map details for StandardToolAgentManager
StandardToolAgentManager.ApplicationMapPersistenceManagerClassName=org.enhydra.shark.appmappersistence.DODSApplicationMappingMgr
NOTE: shark can work without an application map persistence API implementation.
Setting DODS Id generator cache size(s)
You can specify cache sizes for object Ids (activity and process Ids). When a process or activity is created, shark asks its data layer (by default, the DODS layer) for a unique Id. Id generation is synchronized on the DB, so that shark can be used from several VMs at the same time. To keep shark from going to the DB so often, you can specify an Id cache for objects:
#=============================================================================
# DODS Settings for Id Generator
#-----------------------------------------------------------------------------
# default cache size for Ids (if cache size for particular object Id is not
# specified, then this size is used, and if this cache size also isn't
# specified, program default is used)
DODS.defaults.IdGenerator.CacheSize=100
 
# cache size for process instance Ids
#DODS.IdGenerator._process_.CacheSize=100
 
# cache size for activity instance Ids
#DODS.IdGenerator._activity_.CacheSize=100
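To illustrate why the cache size matters, here is a sketch of the block-allocation idea behind such an Id cache: instead of one DB round-trip per Id, a block of `cacheSize` Ids is reserved at once and handed out locally. The names are hypothetical (an `AtomicLong` stands in for the DB-synchronized counter); this is not DODS's actual code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: block allocation of Ids, as done by an Id cache.
// Names are hypothetical; the AtomicLong stands in for the DB counter.
class IdBlockAllocatorSketch {
    private final int cacheSize;
    private long next;      // next Id to hand out locally
    private long blockEnd;  // first Id NOT covered by the reserved block
    private final AtomicLong dbCounter;

    IdBlockAllocatorSketch(int cacheSize, AtomicLong dbCounter) {
        this.cacheSize = cacheSize;
        this.dbCounter = dbCounter;
    }

    synchronized long nextId() {
        if (next >= blockEnd) {
            // "Go to the DB" only once per cacheSize Ids: reserve a new block
            next = dbCounter.getAndAdd(cacheSize);
            blockEnd = next + cacheSize;
        }
        return next++;
    }
}
```

With `CacheSize=100`, the data layer would be contacted only once per 100 created processes or activities, while uniqueness across VMs is preserved because each VM reserves disjoint blocks.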
1.1. About the data model
Here you can find the DODS-generated documentation of the various data models used in the default shark configuration:
Instance persistence data model - (html, pdf)
Event audit data model - (html, pdf)
Repository persistence data model - (html, pdf)
Id Counter data model - (html,pdf)
Database support
When using DODS as the implementation of the persistence APIs, shark can work with different databases - practically any database supported by DODS can be used.
Here is the list of DODS supported databases:
·         DB2
·         Informix
·         HypersonicSQL
·         MSQL
·         MySQL
·         Oracle
·         PostgreSQL
·         Sybase
The default database coming with the Shark distribution is HypersonicSQL, and we also tested it with DB2, MSQL, MySQL, Oracle and PostgreSQL.
 
 
 
posted on 2006-07-28 16:38 by arm-linux