General MDM
What platforms are supported with SAP Master Data Management?
Availability and supported-platform information is published on the SAP Service Marketplace, alias PAM (http://service.sap.com/pam). Drill down into NetWeaver -> SAP MDM -> SAP MDM 5.5. Note that appropriate Service Marketplace authorization is required.
How integrated is SAP NetWeaver MDM 5.5 with SAP NetWeaver and applications?
SAP NetWeaver MDM 5.5 is an integral part of the NetWeaver stack. In the current feature release, enterprise application integration, both SAP and non-SAP, is accomplished through SAP XI. Interoperability with other systems is possible via SAP NetWeaver MDM 5.5’s APIs (including an ABAP API, currently in development). Tight, native integration is part of the SAP NetWeaver MDM 5.5 roadmap, and further pre-built integration points will be rolled out as development progresses. SAP MDM 5.5 SP2 will provide view-only iViews for SAP Enterprise Portal.
Is the Product Catalog Management application part of the SAP NetWeaver Integration and Application Platform? Does print publishing belong to this platform as well?
Yes, these are all part of the SAP NetWeaver platform, and print publishing extends the product content management capability. By definition this is the case, since the former A2i xCat application, now further augmented and known as SAP NetWeaver MDM 5.5, is part of the SAP NetWeaver MDM family of products.
How will MDM fit into Enterprise Services Architecture? Which Web services will be provided and when?
MDM is integral to SAP’s ESA strategy. The initial list of documented Web services was provided with the MDM 3.0 information release. These refer to the ability to access master data in MDM as a service, to create records, and so on. New Web services will be made available as per the roadmap. With SAP MDM 5.5 in conjunction with SAP Exchange Infrastructure, one can create Web services by exposing MDM functions through the MDM Java or .NET APIs.
What tools are available to integrate SAP MDM and other non-SAP applications and platforms?
SAP MDM 5.5 exposes its core functions through published Java and .NET APIs. Any integration between MDM and non-SAP software can be handled using these APIs. MDM functions can also be exposed as Web services using the APIs in conjunction with SAP Exchange Infrastructure. Broader integration between SAP MDM 5.5 and other SAP NetWeaver components will be delivered through the product roadmap.
Can Mask functionality be used for determining which BP records exist in R/3?
There is no need for a mask to be generated, as Syndicator can filter records to be sent according to the Agency and remote key stored within MDM. The “suppress records without key” option needs to be set to “Yes”.
Can a mask be recreated automatically from a saved search selection criteria?
This is not currently supported. Records can be hidden per role using “constraints” functionality in the console.
Can MDM send only changed fields and data and not the whole record?
There are two possible answers to this.
1. If you are extracting changed data through the API, you can set the timestamp field to change only when your key fields change. This will allow you to select only those records whose changes need to be sent to R/3.
2. Using the Syndicator you can use the timestamp technology in calculated fields or set up the relevant search criteria in the Syndicator to select only those records that have relevant changes.
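The timestamp-based selection described above can be sketched generically. This is an illustrative sketch only, not the MDM API: the record layout, field name `changed_at`, and the helper `changed_since` are all hypothetical stand-ins for a calculated timestamp field and a Syndicator search criterion.

```python
from datetime import datetime

# Hypothetical record layout: each record carries a timestamp that is
# refreshed only when relevant fields change (mirroring a calculated
# timestamp field in MDM). Only records newer than the last
# syndication are selected for sending to R/3.
def changed_since(records, last_sync):
    """Return the records whose timestamp is newer than the last sync."""
    return [r for r in records if r["changed_at"] > last_sync]

records = [
    {"id": 1, "changed_at": datetime(2005, 3, 1)},
    {"id": 2, "changed_at": datetime(2005, 3, 5)},
]
last_sync = datetime(2005, 3, 3)
delta = changed_since(records, last_sync)  # only record 2 qualifies
```

The same filter could equally be expressed as a saved search criterion in the Syndicator rather than in code.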
What options are available for resending from MDM within XI or R/3 in case an update fails?
If the failure lies with XI or R/3, the same XML can be reprocessed (no resending is required). If there is a validation or data problem, the records need to be identified and modified in the MDM Data Manager client; the Syndicator batch will then resend them, since they were updated after the last syndication.
How easy is it to maintain the front-end when the data model changes?
The effort depends on the number of fields required for the front end. Added fields have no impact. Fields that are deleted (but still maintained in the front end) need to be removed, and renamed fields need to be updated.
Is it possible to develop web forms (outside of EP6) that link to standard Java MDM APIs and communicate with the MDM repository?
Yes, it is possible; you are not limited to the existing iViews. You can create your own application-specific iViews, and you can also access the server with direct calls to the API from the Java environment.
Is it possible to assign the saved search criteria to a role or person to restrict what he or she can view in the search?
The saved search option is specific to the client computer: a user’s search criteria are available only to that user and not to other users. Therefore saved searches are not an option in this case. Using role constraints, however, you can achieve the required result.
Are adapters/extensions available in MDM for integrating monitoring tools (i.e., does Tivoli register if an exception occurs in MDM)?
MDM currently does not trigger external processes on errors. The system uses its logging capabilities to register errors, and there are specific log files for the various components of the system. If the monitoring system(s) can be triggered on changes to these log files, the system can be monitored.
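Since monitoring has to go through the log files, a watcher can scan them for error markers and raise alerts externally. This is a minimal sketch under assumptions: the log line format shown is invented, and a real deployment would point the scanner at the actual MDM component log files and hand matches to a tool such as Tivoli.

```python
import os
import tempfile

# Scan a log file for lines containing an error marker and collect
# them as alerts. The "ERROR" marker and log layout are hypothetical;
# adapt them to the real MDM log format.
def scan_log(path, marker="ERROR"):
    """Return the log lines that contain the error marker."""
    with open(path) as fh:
        return [line.rstrip("\n") for line in fh if marker in line]

# Simulate an MDM log file for demonstration.
log = tempfile.NamedTemporaryFile("w", suffix=".log", delete=False)
log.write("INFO repository loaded\nERROR import failed for record 42\n")
log.close()
alerts = scan_log(log.name)
os.unlink(log.name)
```

In practice the scanner would remember its last read offset and run on a schedule, so only new log entries trigger alerts.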
Is it possible to hide certain fields and their values (depending on profile)?
The MDM security mechanism allows you to define constraints that hide field values in the MDM Data Manager client. Currently, MDM cannot entirely hide fields based on a constraint setting. However, you can use the APIs to build a user interface that shows or hides fields and attributes as required.
Is it possible to trigger external processes depending on type of errors raised, for example alert management functionality?
Extended error handling with follow-up processing is not currently on the roadmap. However, the MDM Expression Language should be evaluated for this usage.
MDM stores change history in a separate database that can track selected fields in any table, the before and after state of a record for each field, and the user performing the change. As a result, if you activate too many fields or update the same field frequently, you may experience performance problems. How can I better manage this?
Limit the number of tracked fields to the minimum required, and establish a daily archive-and-purge procedure on the change-tracking log/database to keep its size to a minimum and ensure optimal performance.
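The purge side of such a procedure can be sketched with a small SQL job. This is illustrative only: the table name `change_log`, its columns, and the use of SQLite are assumptions for demonstration, not MDM's actual change-history schema, and the archiving step (copying rows elsewhere before deletion) is omitted.

```python
import sqlite3

# Hypothetical change-tracking table; MDM's real schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE change_log (id INTEGER, changed_at TEXT)")
conn.executemany("INSERT INTO change_log VALUES (?, ?)",
                 [(1, "2005-01-01"), (2, "2005-03-01"), (3, "2005-03-10")])

def purge_older_than(conn, cutoff):
    """Delete change-log entries older than the cutoff date.

    A real job would first copy the doomed rows to an archive table.
    """
    cur = conn.execute("DELETE FROM change_log WHERE changed_at < ?",
                       (cutoff,))
    conn.commit()
    return cur.rowcount

removed = purge_older_than(conn, "2005-03-01")  # drops the January entry
remaining = conn.execute("SELECT COUNT(*) FROM change_log").fetchone()[0]
```

Scheduled daily, a job like this keeps the tracking database at a bounded size.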
User Interface - Client and Web Front End
Are saved searches shared between users or roles?
Saved searches (produced from the top menu, Search -> Save current search) in the client or the Syndicator are, in the current version, saved locally per repository. That means they are shared among different users working on the same workstation. Although this may seem a limitation, it makes saved searches more flexible: as files, they can be distributed to different workstations working with the same catalog, or accessed from a share.
Can a saved search be shared between the client and the syndicator?
Searches are saved locally to a file and hence can be shared between the client and the Syndicator by copying the files (with the extension .sqf) to the Syndicator or client directories.
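Because sharing a saved search is just a file copy, it is easy to script. A minimal sketch, assuming placeholder directories: the directory paths below are stand-ins for the actual client and Syndicator installation directories on your workstations.

```python
import shutil
import tempfile
from pathlib import Path

# Copy every .sqf saved-search file from one directory to another.
# Directory locations are placeholders for the real client and
# Syndicator directories.
def share_saved_searches(client_dir, syndicator_dir):
    """Copy all .sqf files and return the names that were copied."""
    copied = []
    for sqf in Path(client_dir).glob("*.sqf"):
        shutil.copy(sqf, Path(syndicator_dir) / sqf.name)
        copied.append(sqf.name)
    return copied

# Demonstration with temporary directories standing in for the
# client and Syndicator installation directories.
client = Path(tempfile.mkdtemp())
syndicator = Path(tempfile.mkdtemp())
(client / "my_search.sqf").write_text("saved search criteria")
copied = share_saved_searches(client, syndicator)
```

The same copy works in reverse (Syndicator to client) or against a network share.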
Do file-shared searches break security restrictions?
Searches are merely sets of query criteria. In other words, any user who opens a saved search will get as results only the records she is allowed to see.
There are too many search tabs in the client’s “Search parameters” pane. How can I display only the ones I want?
In the Console, every field has a “Display Field” parameter that accepts Yes/No values. The client’s “Search parameters” pane shows only fields whose “Display Field” option is “Yes”. Alternatively, a field can be hidden from within the client: right-click the search tab and choose “Hide”.
MDM Server, Console and Repository
What are the relationships between consoles, servers, repositories, and databases?
Once an archive is deployed in a database, it becomes a repository; a repository therefore exists in one database. A repository may be mounted on many servers, but it can be loaded on only one server at a time. One server may be accessed (mounted) from many consoles, and the server’s status is updated on all consoles where it is mounted.
Can two servers run on the same computer?
The current version (MDM 5.5 SP1) does not allow you to run two server instances on one computer.
Why, in the Console’s security tabs, do constraints appear only for certain tables?
If a lookup table is referenced only by single-valued fields of the main table or of other lookup tables, its values are available in the Constraints field. If it is referenced by a multi-valued lookup field or by a qualified lookup table’s field, its values are not available in the Constraints field.
Are Text Blocks and Text fields multilingual?
Text Blocks, as well as PDFs and Images, are always multilingual. The Text and Large Text data types may optionally be defined as multilingual in the Console.
What is NULL and how is it used in MDM?
NULL is a special data marker denoting data that is missing or not yet populated. It cannot be treated as an existing value and does not participate in uniqueness checks; that is, multiple records are permitted to have NULL in a unique field. To prevent records from containing undefined values, use the validation functions IS_NULL and IS_NOT_NULL. NULLs can also be handled through the Import Manager.
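The behavior described, where NULLs do not collide in a unique field, is the same in most relational engines, so it can be demonstrated directly. The sketch below uses SQLite purely for illustration; the table and column names are invented, not MDM internals.

```python
import sqlite3

# Demonstrate that NULL does not participate in uniqueness checks:
# two NULLs coexist in a UNIQUE column, while a repeated real value
# is rejected.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT UNIQUE)")
conn.execute("INSERT INTO products VALUES (NULL)")
conn.execute("INSERT INTO products VALUES (NULL)")  # no violation
try:
    conn.execute("INSERT INTO products VALUES ('A-1')")
    conn.execute("INSERT INTO products VALUES ('A-1')")  # real duplicate
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
null_count = conn.execute(
    "SELECT COUNT(*) FROM products WHERE sku IS NULL").fetchone()[0]
```

This is why a validation such as IS_NOT_NULL is needed if you want a unique field to be effectively mandatory.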
What does “Required” stand for?
The Required field parameter is not a mandatory but an advisory property; in other words, it differs from the “NOT NULL” definition of the RDBMS world. Its purpose is to let validation rules treat all “Required” fields at once. Such a validation expression is not available yet but will be exposed in coming versions. The advantage of this approach is that when your validation logic changes and you decide a field should or should not participate in the validation, you simply change the value of “Required”; otherwise, every validation expression would have to be changed manually.
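Until such a built-in expression exists, a “check all Required fields at once” validation has to be written explicitly. This is a hedged sketch of the idea: the record layout and the field names `name`, `price`, and `color` are hypothetical, and `None` stands in for MDM’s NULL marker.

```python
# Check every field flagged as Required in one pass, so that toggling
# the Required flag is the only change needed when the validation
# scope changes. Field names here are invented for illustration.
def validate_required(record, required_fields):
    """Return the Required fields that are still NULL (None)."""
    return [f for f in required_fields if record.get(f) is None]

required = ["name", "price"]  # fields flagged Required in the Console
record = {"name": "Widget", "price": None, "color": "red"}
missing = validate_required(record, required)  # ["price"]
```

Changing which fields participate then means editing only the `required` list (i.e., the Required flag), not every validation expression.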
Can LDAP property "MDMERoles" be changed to something else?
Yes, the name of the LDAP field listing MDME roles is defined in the mds.ini file, in the field "MDME Roles Attributes" (see the MDME Console guide, p. 255). MDMERoles is just a predefined name.
Can an attribute have additional parameters associated with it like status or userID timestamp?
Attributes cannot have any additional parameters except type, name, alias, description, and a set of predefined values (for the text type). For other purposes, you should probably use a lookup table.
Import Manager
Do validations work during the import process?
Validation rules are not executed during import through the Import Manager. To execute a validation (or a group of validations) over newly imported records, you may use the API: a special flag can be set during import, and the validations can then be executed against the easily identified record set.
Can I import my data in several steps?
Yes, you may have different maps for populating the same records. These maps may bring data into different, or even partially overlapping, fields. The only things to take into account are matching the records correctly and choosing the right import action.
Does the Import Manager have to import all records of the source, or can I skip some of them?
The Import Manager deals with source records on an “all or nothing” basis in terms of mapping: every record must be mapped before the Import Manager will start importing. However, you can map undesirable records to NULL or to a flagged record so they are easily recognizable for future cleaning. Another option is the Skip import action, which may be applied to a group of records (grouped according to their Match Class) or individually, by changing their default inherited import action.
Can data be imported from two different sources concurrently?
The Import Manager works with only one source at a time. However, several tables of the same database connection can be joined online in the Import Manager. For separate XML files, all pre-upload data operations should be performed before the Import Manager receives the data; in this case, an XSLT transformation can be used effectively.
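One common pre-upload operation is merging several single-element XML files into one collection document before handing it to the Import Manager. The sketch below shows the idea with Python's standard XML library rather than XSLT; the element names `products` and `product` are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Merge the root elements of several XML documents under one
# collection root, so the Import Manager sees a single source.
def merge_xml(documents, root_tag="products"):
    """Wrap each document's root element under one new root."""
    root = ET.Element(root_tag)
    for doc in documents:
        root.append(ET.fromstring(doc))
    return ET.tostring(root, encoding="unicode")

docs = ["<product id='1'/>", "<product id='2'/>"]
merged = merge_xml(docs)
```

An XSLT stylesheet applied before import achieves the same result and is the route suggested above.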
When I import a qualified lookup table it duplicates records. What’s the right way to avoid it?
Right-click the field of the main table that points to the qualified lookup table and choose “Set Qualified Update”. Three options for handling repeating values then appear: “Append” adds new records, “Replace” replaces them, and “Update” updates them. In all three cases, an additional option lets you specify how to match existing records through the available qualifiers.
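The three options behave roughly like the following list operations on a record’s qualified links. This is an illustrative sketch only: the data shapes, the qualifier name `plant`, and the helper `qualified_update` are invented, not MDM internals.

```python
# Approximate the Append / Replace / Update semantics of a qualified
# update, matching existing links on a qualifier field.
def qualified_update(existing, incoming, mode, key="plant"):
    if mode == "Append":      # add new links, keep the old ones too
        return existing + incoming
    if mode == "Replace":     # discard the old links entirely
        return list(incoming)
    if mode == "Update":      # match on the qualifier and merge
        merged = {r[key]: r for r in existing}
        merged.update({r[key]: r for r in incoming})
        return list(merged.values())

old = [{"plant": "A", "price": 10}]
new = [{"plant": "A", "price": 12}, {"plant": "B", "price": 9}]
appended = qualified_update(old, new, "Append")   # 3 links, A duplicated
replaced = qualified_update(old, new, "Replace")  # 2 links, old A gone
updated = qualified_update(old, new, "Update")    # 2 links, A updated
```

The duplicate records in the question come from the Append-like default; choosing Update with the right qualifiers avoids them.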
The Import Manager treats an XML file containing only one element and an XML file containing a collection of the same elements differently. How can I use the same map for both cases?
Create an XML schema (XSD file) and add it to the catalog (Console -> Admin -> XML Schema). Such schemas are then available as source types in the Import Manager. When you create a map, the corresponding XSD file is saved with the map, allowing it to be reused for future imports.
What is the difference between Update and Replace in the Import Manager?
Replace erases the matching existing destination record and creates a new one based on the source record; as a consequence, all relationships with other fields are broken. Update actually updates the fields, leaving all existing relationships in place (see also p. 296 of the Import Manager manual).
Is there a way to have a field displayed (in a lookup combo-box) but not participating in matching destination fields in the import process?
If a table has “key mapping” set to “Yes”, then “Remote Key” appears in the Import Manager’s destination field list. You may choose an existing field in the source fields tab, or create a new one, and map it to the remote key. A mapped “Remote Key” is sufficient to perform the actual import, so display fields need not participate in the matching.
Can I run two Import Managers at the same time?
Yes, several instances of the Import Manager can be open concurrently. The only limitation is that when the actual import process starts, the tables involved are locked for write access. If two instances import into different tables, no synchronization issue occurs; if they use the same table, the instance that gets access second must wait until the first finishes.
Where are import maps stored and can maps be edited externally?
Maps are stored in the repository and can be edited only through the Import Manager. They can be exported to and imported from binary files, and they are archived and unarchived together with the other repository data.
Can a source field partake in mapping twice?
Yes; to achieve this you have to clone the field in the Import Manager. Note, however, that cloned fields cannot start a partitioning string.
What is the most efficient format for source data when using Import Manager?
The most efficient formats for massive uploads are Microsoft Excel and Access, with no difference in performance between them. Because Excel limits the number of records a single spreadsheet can hold, Access is the recommended option.
Is it possible to handle exceptions during import automatically (continue with import if 1 record fails, stop processing for a specific error but continue for another, or notify specific user if record update fails)? How are failed records reprocessed both in MDM and in the Business?
The import process generates log files containing the failed records and the reasons they failed to import. As in the previous process, these files can be collected and placed into a user’s process or email inbox. The user then has the responsibility to fix the errors and re-import the data that was rejected by the batch process.
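Collecting the failures out of such a log can be scripted. This is a minimal sketch under assumptions: the `FAILED <id> <reason>` line format is invented, so the parsing must be adapted to the actual MDM log layout.

```python
# Extract (record id, reason) pairs from import-log failure lines so
# they can be routed to a user for correction and re-import. The log
# line format here is hypothetical.
def failed_records(log_lines):
    """Return (record id, reason) for every FAILED line."""
    failures = []
    for line in log_lines:
        if line.startswith("FAILED"):
            _, rec_id, reason = line.split(" ", 2)
            failures.append((rec_id, reason))
    return failures

log = [
    "IMPORTED 1001",
    "FAILED 1002 missing required field 'price'",
]
failures = failed_records(log)
```

The resulting list can then be attached to an email or fed into a workflow inbox as described above.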
An archiving process needs to be in place to remove and store old files that have already been processed. How can I handle files that are partially processed?
The organization of the file import should be handled by the program calling the batch Import Manager. This could be XI BPM, or a simple program that checks the input files and invokes the Import Manager accordingly. Both scenarios are relatively easy to achieve, as is the process of archiving and removing the files that were processed.
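Such a driver program can be sketched as follows. This is illustrative only: the directory paths are placeholders, and the `import_fn` callback stands in for the real invocation of the batch Import Manager, which is not shown.

```python
import tempfile
from pathlib import Path

# Scan an inbox directory, process each file, and move processed
# files to an archive directory. Anything still left in the inbox is
# therefore unprocessed or only partially processed, which makes
# failures easy to spot.
def process_inbox(inbox, archive, import_fn):
    """Process every XML file in the inbox, archiving as we go."""
    processed = []
    for source in sorted(Path(inbox).glob("*.xml")):
        import_fn(source)                           # real import call here
        source.rename(Path(archive) / source.name)  # archive afterwards
        processed.append(source.name)
    return processed

# Demonstration with temporary directories and a no-op import.
inbox = Path(tempfile.mkdtemp())
archive = Path(tempfile.mkdtemp())
(inbox / "batch1.xml").write_text("<products/>")
done = process_inbox(inbox, archive, lambda p: None)
```

Moving a file only after its import succeeds is what makes partially processed files detectable: they remain in the inbox for the next run or for manual inspection.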
Syndicator
How do I control whether the Syndicator produces one XML file with many elements or many XML files with one element each?
If you map the table name to the top element of your XML collection, you get one output file; otherwise, every element is output in its own file. The best way to control this is to create an XML schema, upload it to the server (in the Console), and then use that schema for mapping.
Can repository data be syndicated as it is or is special output processing possible?
Yes. There is a "Custom Item" pane where a custom field may be created and then mapped like a regular source field.