Perforce Defect Tracking Integration Project
This is the Perforce Defect Tracking Integration version 0.4 Administrator's Guide. It explains how to install, configure, maintain, and administer the Perforce Defect Tracking Integration (P4DTI).
Warning: The Perforce Defect Tracking Integration version 0.4 is beta test software, intended for use only during the project's beta program. The software will contain defects. We recommend that you do not rely on the integration in your organization. The integration may destroy your data. See the project web site <http://www.ravenbrook.com/project/p4dti/> for information about planned releases. We are very interested in your feedback on the beta. Please write to p4dti-beta-feedback@ravenbrook.com.
This is outline documentation, still in development.
This document is intended for the P4DTI administrator at your organization. Ordinary users of the defect tracker or Perforce should not need to read it. See "Training and Documentation" in Section 8.
Although this guide will teach you how to administer the integration, it won't teach you the basics of actually using the integration, Perforce, or the defect tracker. You should also read the Perforce Defect Tracking Integration User's Guide [RB 2000-08-10] in order to understand the integration from a user's perspective.
This section provides a summary of the process of installing, configuring, and running the integration, and also describes what the integration does and how it does it.
To install and run the P4DTI, you must meet the prerequisites, install the integration software, configure the integration, and then start the replicator.
These steps are documented in this manual. We also suggest that you read Section 2.2 so that you understand what the P4DTI does.
The integration mainly works by replication. The replicator is a process that copies (replicates) data between a defect tracker and a Perforce server in order to keep each one up to date with changes made in the other. This allows developers to do their routine defect resolution work entirely from their Perforce client, without needing to use the defect tracker's interface. It also allows developers to relate their changes to defect tracking issues.
Figure 1 shows how the replicator communicates with the defect tracking server and the Perforce server.
The replicator maintains a one-to-one relationship between issues in the defect tracker's database and jobs in the Perforce repository. In other words, each issue has a corresponding job, and vice versa. For a configurable set of fields, the replicator keeps the contents of each issue and its corresponding job identical, so that editing one updates the other.
The replicator also copies Perforce's links between jobs and changelists (called "fixes") to the defect tracker's database, and makes them visible in the defect tracker's user interface. This makes it possible to track and record the changes made for each issue, and to find out which issues motivated any particular change.
[We could do with a diagram of the relationship between issues, jobs, and changelists here. RB 2000-11-27]
Most defect trackers have a notion of workflow -- a set of rules that control who can make which changes to which issues, and at what point in the process those changes can be made. The replicator enforces the defect tracker's workflow: if a change to a job in Perforce would be illegal in the defect tracker, the replicator rejects the change, undoes it, and sends an e-mail message to the user.
The replicator polls the defect tracking server and the Perforce server at regular intervals to get a list of recent changes, and attempts to propagate these changes to the other system. If both sides change at the same time, the replicator overwrites the Perforce job with the defect tracker record and sends an e-mail message containing the overwritten data to the people involved.
We recommend running the replicator on the same machine as the defect tracker's server, partly to keep all defect-tracking-related administration on one machine, and partly because Perforce's network protocol usually copes better with a remote connection than the defect tracker's does. The rest of this manual assumes that you're doing this.
Figure 1. The replication architecture
This section explains what you will need before you can install the P4DTI. It is divided into subsections that explain the Perforce prerequisites and the defect tracker prerequisites, respectively. You need to meet both sets of prerequisites before you install the integration.
We recommend that administrators of the integration have at least six months of Perforce administration experience and a basic knowledge of Python. If you are unfamiliar with Python, you may want to read the Python tutorial at <http://www.python.org/doc/current/tut/tut.html> -- it's short. You will certainly need a good working knowledge of Perforce's command line interface, and should have read both the Perforce Command Line User's Guide [Perforce 2000-10-09] and the Perforce System Administrator's Guide [Perforce 2000-10-11].
You will need to be running a Perforce server of version 2000.2 or later. Server upgrades can be downloaded from the Perforce FTP server at <ftp://ftp.perforce.com/pub/perforce/>. Be sure to read the release notes (available from <http://www.perforce.com/>) before you install, and contact Perforce technical support if you need help.
[As of 2000-11-29, Perforce 2000.2 is in beta testing. Be sure to read the release notes before you attempt to use it, and back up your Perforce repository. RB 2000-11-29]
You will also need Perforce licenses for every defect tracking user who is going to work in Perforce, plus an extra "daemon license" for the replicator. A daemon license is a license for an automatic process, rather than a person, and Perforce's policy is to provide these free of charge. Contact Perforce technical support to get one.
You must ensure that your Perforce users upgrade their clients to version 2000.2 or later.
We recommend that you back up your Perforce repository before you install the P4DTI, to make sure you don't lose any data. See the Perforce System Administrator's Guide [Perforce 2000-10-11, Chapter 2].
You might want to practice installing and configuring the P4DTI using a test Perforce repository before you try it on your real one. A copy of your real repository would be ideal. See the Perforce System Administrator's Guide [Perforce 2000-10-11, Chapter 1].
The address and port number of your Perforce server will need to be entered into the replicator configuration; for details, see Section 5.
You can skip this section if you're not using the P4DTI with TeamTrack.
You will need to be running TeamTrack version 4.0.4 or later.
[As of 2000-10-29, TeamTrack 4.0.4 is not available, so a special pre-release build of TeamTrack, build 4402, must be used. An installer can be downloaded from <http://www.ravenbrook.com/project/p4dti/import/2000-10-13/teamtrack-4402/ttrk4402.exe>. Warning: This build of TeamTrack may "upgrade" your database in a non-reversible way when it is first run, making it incompatible with your current release. Be sure to back up your database if you want to go back to your existing release, and consider trying out the P4DTI on a copy of the database. RB 2000-11-29]
We recommend that you back up your TeamTrack database before you install the P4DTI, so that you don't lose any data. See the section named "Copying a Database" in the tTrack 4.0 Administrator Manual [TeamShare 2000-05, page 67]. (You might want to practice installing and configuring the P4DTI using a test TeamTrack database before you try it on your real one. We've included a sample TeamTrack database with the integration; see Section 4.4 for information on how to use it.)
You will need TeamTrack licenses for every Perforce user who is going to work in TeamTrack, plus one extra for the P4DTI itself. To clarify, every Perforce user who will be assigned issues must also have a TeamTrack license, because only licensed TeamTrack users can legally own issues. The replicator also needs a license, but TeamShare will provide one free of charge for this purpose. Contact TeamShare support to get one.
You will also need Administrator-level access to the TeamTrack server machine, and approximately 5Mb of free disk space for the integration, plus space for logs. [Logs grow continuously at the moment. Do we also need to tell people about the extra space needed in the database? RB 2000-11-29]
The TeamTrack server machine will also need Python 2.0 for Windows. This is available from <http://www.ravenbrook.com/project/p4dti/import/2000-10-18/Python-2.0/BeOpen-Python-2_0.exe>.
You must also ensure that your TeamTrack users do not have TeamShare's SourceBridge plug-in installed. SourceBridge will prevent the P4DTI working properly, and is completely replaced by it.
You must ensure that the workflows defined in your TeamTrack database are compatible with the P4DTI. The P4DTI has to infer which TeamTrack transitions to apply when the job state is changed in Perforce, and it can be confused by some configurations. It might behave unexpectedly if you have
We also recommend that projects which share states also have the same transitions between them. Only the states are visible from Perforce, not the transitions, so if projects that share states have different transitions, users are likely to get confused.
You can skip this section if you're not using the P4DTI with Bugzilla.
You will need to be running Bugzilla 2.10, and be using it with MySQL. You can download Bugzilla 2.10 from <http://info.ravenbrook.com/project/p4dti/import/2000-05-09/bugzilla-2.10/bugzilla-2.10.tar.gz>. Installing Bugzilla is quite a lengthy process and may involve downloading and installing further prerequisites. Be sure to study the README file that comes with it.
The P4DTI includes a patch file for Bugzilla 2.10 that will need to be applied to get all the functionality. If you've modified your Bugzilla code, the patch may still work, but we can't guarantee it.
The Bugzilla server machine needs Python 1.5.2 or later, and the MySQLdb Python package 0.2.2 or later. Python 1.5.2 is available from <ftp://ftp.python.org//pub/python/src/python-1.5.2.tar.gz> for sources or <ftp://ftp.python.org//pub/python/binaries-1.5/> for pre-made binaries and RPMs for some systems. MySQLdb 0.2.2 is available from <http://www.ravenbrook.com/project/p4dti/import/2000-08-09/python-mysqldb-0.2.2/MySQLdb-0.2.2.tar.gz>.
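To confirm that these Python prerequisites are in place on the Bugzilla server machine, a quick check along the following lines may help. This is only an illustrative sketch, not part of the P4DTI.

# Illustrative check that the Python prerequisites are installed.
import sys
print sys.version             # should report 1.5.2 or later
import MySQLdb                # raises ImportError if the MySQLdb package is missing
print "MySQLdb imported successfully"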
You will need approximately 2Mb of free disk space on the Bugzilla server machine for the integration, plus space for logs. [Logs grow continuously at the moment. Do we also need to tell people about the extra space needed in the database? RB 2000-11-29]
[This section and the next need a lot of organizing. Much of what's in this section at the moment is really configuration, or else depends on configuration decisions. GDR 2000-10-19]
This section explains how to install the integration software. It is divided up into the following subsections:
Work through the subsections in the order in which they appear in this manual. [Can they stop halfway through and finish installing the integration later? Need to explain what they can and can't do while installing. LMB 2000-11-25]
Before you go any further, make sure you have already met all the prerequisites for both Perforce and the defect tracker. In particular, you must have installed all the prerequisite software. See Section 3 for a complete list of the prerequisites.
You will also need a copy of P4DTI product release version 0.4. Releases are available from the release page at <http://www.ravenbrook.com/project/p4dti/release/>. We recommend that you get the highest numbered release, check the release notes, and switch to the manuals that come with it, rather than continuing with this manual. There may be important bug fixes.
Before you install the integration, you need to decide which directory to install the integration in. We recommend C:\Program Files\P4DTI\. You will use this information when you unpack the integration software in Section 4.2.
The integration software is distributed as a self-extracting executable named p4dti-teamtrack-RELEASE.exe (where RELEASE is the release number, such as 0.4.1). Run this executable on the TeamTrack server machine. The installer unpacks the integration into C:\Program Files\P4DTI\ by default, but you can ask the installer to put it somewhere else.
Before you can use the integration, you need to create two new TeamTrack values in the Windows Registry. You can do this as follows:

1. Run regedt32.
2. Open the key HKEY_LOCAL_MACHINE\Software\TeamShare\TeamTrack.
3. Add a value named VC Integration, data type REG_SZ, and string contents perforce.
4. Add a value named VC Update Group, data type REG_SZ, and string contents Everyone.

Your Registry should now look like the one shown in Figure 2. (If you would rather script this step, see the sketch after Figure 2.)
Figure 2. TeamTrack Registry keys
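If you prefer to script this step rather than use regedt32 by hand, the following sketch uses Python's standard _winreg module (part of Python 2.0 for Windows, which the TeamTrack integration already requires). It is only an illustration of the two values described above, not a supported part of the P4DTI installation.

# Illustrative only: create the two TeamTrack Registry values described above.
import _winreg

key = _winreg.CreateKey(_winreg.HKEY_LOCAL_MACHINE,
                        r"Software\TeamShare\TeamTrack")
_winreg.SetValueEx(key, "VC Integration", 0, _winreg.REG_SZ, "perforce")
_winreg.SetValueEx(key, "VC Update Group", 0, _winreg.REG_SZ, "Everyone")
_winreg.CloseKey(key)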
We recommend that you use a sample TeamTrack database when testing the integration. A sample database is distributed with the integration. To use this database, follow the instructions below. (See also the section named "Connecting to databases" [TeamShare 2000-05, page 58].)

1. In the TeamTrack Administrator, create a data source named P4DTI test: select Microsoft Access Driver (*.mdb) from the list of drivers, and give the path to the sample database as C:\Program Files\P4DTI\tTrackSample.mdb. Your dialog should now look like the one in Figure 3.
2. Select P4DTI test from the list of data sources (this is the data source that you just created). Your dialog should now look like the one in Figure 4.
3. If you are not using IIS to serve TeamTrack, stop and restart the TeamTrack server. (If you are using IIS, you can skip this step and go on to the next section.)
Figure 3. Path to data source in the TeamTrack Administrator
Figure 4. Selected data source in the TeamTrack Administrator
Next, you need to create a user in TeamTrack to represent the replicator. We recommend that you call this user "P4DTI" (but see Section 5.5 if you are replicating to several Perforce servers). Later you'll need to configure the replicator to use the user you have chosen; see Section 5.
To create the user, follow these steps:
[To do. GDR 2000-11-29]
[I think the following job fix (job000056) should go in this section once it's written. I've included it here in case this section isn't done before the beta and the beta users need the information. LMB 2000-11-30]
[The replicator needs to be a Perforce super user to update the Perforce jobspec. You have to add the P4DTI user to the "p4 protect" table if you're using the table.]
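For example, if you are using the protections table, a line like the following gives the replicator's Perforce user super access. This is a sketch only: p4dti-replicator0 is the example user name used elsewhere in this manual, so adjust the user name and host field to match your own setup.

super user p4dti-replicator0 * //...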
This section describes the configuration decisions that each organization has to make and how to configure the integration to implement those decisions. It also contains example configurations and describes common pitfalls.
The integration's configuration will also need to be updated when the organization changes in various ways. See "Maintaining the configuration" [no section yet] for details.
The decisions that you (or someone else in your organization) need to make are:
[and lots more. (Moved from final list item. LMB 2000-11-25)]
[Section not written yet. RB 2000-09-20]
The integration is configured by editing definitions in Python. The configuration for a defect tracker is defined in the file config_DEFECT_TRACKER.py in the installation directory, which is C:\Program Files\P4DTI\ by default. For example, the configuration for the TeamTrack release is in the file config_teamtrack.py.
The majority of this file consists of variable definitions. Edit these definitions according to the descriptions in the next section. That's it! You should now be ready to run.
[How does the administrator check the configuration? RB 2000-09-21]
[There should follow sections on how to implement each of the configuration decisions. RB 2000-09-21]
[We must explain how to replicate just the cases belonging to a single project [RB 2000-11-20, section 8 item 9] RB 2000-11-20]
Replicator configuration parameters:
Name | Meaning |
---|---|
rid | The replicator identifier. This is a token used to distinguish between replicators in situations where multiple defect trackers are being replicated to the same Perforce server, or a defect tracker is being replicated to multiple Perforce servers. The replicator identifier should be 32 characters or less, start with a letter or underscore, and consist of letters, numbers and underscores only. For example, "replicator0". In the common case where you have only one replicator, it doesn't matter what you use for the replicator identifier; "replicator0" is a good choice since it allows you to add more replicators later. |
sid | The Perforce server identifier. This is a token used to distinguish between Perforce servers in situations where a defect tracker is being replicated to multiple Perforce servers. The Perforce server identifier should be 32 characters or less, start with a letter or underscore, and consist of letters, numbers and underscores only. For example, "perforce0". In the common case where you have only one Perforce server, it doesn't matter what you use for the Perforce server identifier; "perforce0" is a good choice since it allows you to add replication to more Perforce servers later. Alternatively, you could use the hostname of your Perforce server, if this is stable. |
administrator_address | The e-mail address of the administrator of the integration. The replicator sends reports of conflicts in the data and other errors to this address. We recommend that you set up an alias so that you can change the person responsible for administrating the integration (or redirect it temporarily) without having to change the configuration and restart the replicator. For example, "p4dti-admin@company.com". |
changelist_url | A format string used to build the URL for change descriptions, or None if there is no URL for change descriptions. The string has one %d format specifier for which the change number will be substituted. Defect trackers that support this feature will list the changelists that fix each issue, and make a link from each changelist to this URL, with the change number substituted. For example, if you are using perfbrowse then a suitable value would look like "http://info.company.com/cgi/perfbrowse.cgi?@describe+%d". [This needs a reference to the perfbrowse distribution and an example of a suitable format string for p4web. Also a list of defect trackers that support this feature (none as yet). Any other details needed? GDR 2000-11-30] |
log_file | The name of the replicator's log file. For example, "C:\\Program Files\\P4DTI\\p4dti.log". |
p4_client_executable | The path to the Perforce client executable that the replicator uses. The replicator requires this client to be version P4/NTX86/2000.1/17595 (2000/09/28) or later. For example, "C:\\Program Files\\Perforce\\p4.exe". |
p4_password | The password with which the replicator logs into Perforce, or the empty string if there is no password. For example, "". |
p4_port | The address and port of the Perforce server that the replicator replicates to and from. For example, "perforce.company.com:1666". |
p4_server_description | A human-readable description of the Perforce server that the replicator replicates to. This may be used by the defect tracker to indicate which Perforce server an issue is replicated to. For example, "Hardware development group Perforce server". |
p4_user | The userid with which the replicator logs into Perforce. For example, "p4dti-replicator0". This is the Perforce user that you created in Section 4.6. We recommend that you incorporate the replicator identifier into this userid, so that you can easily add more replicators later. |
replicator_address | The e-mail address from which the replicator sends e-mail. For example, "p4dti-replicator0@company.com". We recommend that the user part of this e-mail address be the same as the replicator userid in Perforce, so that it's clear which replicator is sending the e-mail if you have several replicators. We recommend that this address be an alias for the administrator address so that people can reply to the automatically generated e-mail and get to someone who can deal with their problem. |
smtp_server | The address of the SMTP server that the replicator uses when sending e-mail. For example, "smtp.company.com". |
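Putting these together, the replicator parameters in config_teamtrack.py might look something like the following sketch. The values shown are the examples from the table above; substitute values appropriate to your site, and treat the layout as illustrative rather than definitive.

# Example replicator parameters (illustrative values only).
rid = "replicator0"
sid = "perforce0"
administrator_address = "p4dti-admin@company.com"
changelist_url = "http://info.company.com/cgi/perfbrowse.cgi?@describe+%d"
log_file = "C:\\Program Files\\P4DTI\\p4dti.log"
p4_client_executable = "C:\\Program Files\\Perforce\\p4.exe"
p4_password = ""
p4_port = "perforce.company.com:1666"
p4_server_description = "Hardware development group Perforce server"
p4_user = "p4dti-replicator0"
replicator_address = "p4dti-replicator0@company.com"
smtp_server = "smtp.company.com"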
TeamTrack configuration parameters:
Name | Meaning |
---|---|
replicated_fields | A list of the database names of TeamTrack fields that will be replicated between issues and jobs. The fields STATE, OWNER, and TITLE are always replicated, so don't include those fields here. (For an example, see the sketch after this table.) Which fields should you replicate? Here's some advice. Remember that generally developers will be doing their defect resolution using Perforce. So any field that developers will need to look at to do their work should be replicated. For example, if you're using TeamTrack's sample database or something similar to it, you should replicate DESCRIPTION (so that developers can understand what the problem is), SEVERITY and PRIORITY (so that developers can prioritize their work). Any field that you require developers to fill out in the course of their work should be replicated. For example, if you're using TeamTrack's sample database, you may want to replicate the ESTTIMETOFIX and ACTTIMETOFIX fields. Other fields should probably not be replicated -- the simpler it is for developers, the better the quality of the information they enter will be. This may mean modifying your workflows so that these fields can be omitted. For example, you may have a FIX_DESCRIPTION field for the developer to explain what they did. Now that you're running the integration, you don't need that field, since you can look in the change comments of the associated changelists to find out what the developer did. So leave it out. If you need to change the list of replicated fields after you've been using the integration for a while, then you need to take care. See the section on maintenance (section 10) for details. |
teamtrack_server | The TeamTrack server hostname and (optionally) port that the replicator will replicate to. For example, "teamtrack.company.com:80". |
teamtrack_password | The password that the replicator uses to log into TeamTrack, or the empty string if there is no password. For example, "". |
teamtrack_user | The user name that the replicator uses to log into TeamTrack. For example, "P4DTI-replicator0". This is the user that you created in Section 4.5. |
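As with the replicator parameters, the TeamTrack-specific definitions might look something like the sketch below. The field names are the examples discussed under replicated_fields above; the assumption that replicated_fields is written as a Python list of database field names follows from the description in the table, but check the comments in config_teamtrack.py itself.

# Example TeamTrack parameters (illustrative values only).
replicated_fields = ["DESCRIPTION", "SEVERITY", "PRIORITY",
                     "ESTTIMETOFIX", "ACTTIMETOFIX"]
teamtrack_server = "teamtrack.company.com:80"
teamtrack_password = ""
teamtrack_user = "P4DTI-replicator0"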
[We should include a simple example which would suit organization new to defect tracking, or who have just migrated from Perforce jobs, and a more complex example, more suitable for TeamTrack users with non-trivial workflows. RB 2000-09-21]
[This topic will probably deserve special treatment, but if not, delete this section. RB 2000-09-21]
This section explains how to migrate your defect tracking data from your existing defect tracker to the integrated system.
[As of version 0.4, the P4DTI does not support migration from Perforce jobs. RB 2000-11-29]
[In fact, it is possible to configure the replicator to use existing jobs, and to use the migrate_teamtrack.py script to force them into TeamTrack, but it's complicated and messy and not very well developed. We may develop it further if there's demand during the beta program, but so far we've come across very few sites that use Perforce jobs. See also "Advanced configuration" in Appendix D. RB 2000-11-29]
No special action is required to migrate defect tracking data from TeamTrack to the integrated system.
The replicator starts replicating TeamTrack cases as soon as it starts up. Only cases that are created or modified after the replicator is first started are replicated to Perforce.
[As of version 0.4 there is no easy way to start the replication of older cases. We intend to provide one. You can cause the replicator to replicate an old case by making an empty change to it from TeamTrack -- just click "Update" and "Update" again in the case form. RB 2000-11-29]
No special action is required to migrate defect tracking data from Bugzilla to the integrated system.
The replicator starts replicating Bugzilla bugs as soon as it starts up. Only bugs that are created or modified after the replicator is first started are replicated to Perforce.
[As of version 0.4 there is no easy way to start the replication of older bugs. We intend to provide one. You can cause the replicator to replicate an old bug by making an empty change to it from Bugzilla -- just click "Commit" in the bug form. RB 2000-11-29]
This section describes how you can test the integration setup to make sure it's working properly.
Note: The replicator does not currently start automatically. You must start it yourself by following these steps:
1. Change to the integration's installation directory (C:\Program Files\P4DTI\ by default; see above [See above where? Need to provide a reference point. I'm not convinced this XREF is useful anyway. LMB 2000-11-25]).
2. Run the command python run_teamtrack.py.

[What should you expect to happen? GDR 2000-10-19]
To stop the replicator, press Control-C and wait for the replicator to next poll (10 seconds by default).
[A good approach to take in this section would be to create a test issue and take it through a complete life-cycle (i.e. through the workflow). We can't know what the workflow will be at the organization, but we can use an example. RB 2000-10-07]
Run the consistency checker by following these steps:
1. Change to the integration's installation directory (C:\Program Files\P4DTI\ by default; see above [See above where? Need to provide a reference point. I'm not convinced this XREF is useful anyway. LMB 2000-11-25]).
2. Run the command python check_teamtrack.py.

[What should you expect to happen? GDR 2000-10-19]
Examine the database by hand using a database application -- for example, Microsoft Access -- to make sure that the Perforce data appears there as you expect.
This section explains how to test the integration as if you were a user of Perforce. You should go through a typical sequence of actions that you expect your developers to go through -- a dry run, as it were.
[It's probably not necessary to test the integration from every Perforce interface, but we should describe how to do it for each of them in the manual. Advise the administrator to use whichever their developers are most likely to use.]
The main interfaces are:
[Section not written yet. RB 2000-09-20]
This section explains how to test the integration as if you were a user of the DT.
[Probably need sub-sections for each DT.]
[Section not written yet. RB 2000-09-20]
[We're going to provide the UG which will be "how to" documentation for the use cases and requirements. But this section will recommend getting the users together and telling them about the integration and how to use it, in overview. LMB suggests we provide a presentation outline. GDR suggests that we cover some key stuff that users should or shouldn't do (e.g. "don't edit the P4DTI-* fields in jobs"). RB will write this presentation, as it's needed for the alpha programme anyway. We need to cover how to use it, and what to do when it goes wrong, for example, when a conflict happens, or when they come across a conflicting issue. This presentation material will also be useful for the user guide. RB 2000-10-07]
[It might be a good idea to tell everyone to upgrade their Perforce clients to version 2000.2 or later, and to make sure that they uninstall TeamShare SourceBridge. (See section 3.1 and section 3.2.) RB 2000-11-30]
[Section not written yet. RB 2000-09-20]
[When the admin is configuring and testing, he doesn't want to use the real database. We should explain this earlier on. And we'll need to explain how to make the integration work on the real data smoothly. We can provide a plan, and contingency plans for if things go wrong. For example, recommend coming in early in the morning or over a weekend. We can estimate how long it will take.
[Configuration should've taken place on a duplicate of their real data. This might not be feasible if the real data is a 10Gb Perforce repository and a 100Gb corporate Oracle database. We'll need to think about that one.
[They'll need to re-run the testing on the live system as well. (The earlier testing was to make sure the configuration worked, this is to make sure the live system is working.)
[They should configure the replicator to start automatically when the system comes up, and check that it does.
[They should then watch the system working for a while. We should list things to look out for. RB 2000-10-07]
[Section not written yet. RB 2000-11-30]
[Section not written yet. RB 2000-11-30]
The TeamShare API used by the replicator to communicate with the TeamTrack server does not report the reasons for errors. This means that the P4DTI cannot tell why it can't replicate a change to a job from Perforce to TeamTrack. In general, the P4DTI assumes that it's because the user doesn't have permission to modify the corresponding case, or because they've made a mistake or omitted information in filling out the job form. But sometimes there's a real error. If a user can't get a change through and you don't know why, check the Windows Application Log on the TeamTrack server machine using the Event Viewer. The Event Log usually contains more information about why things are going wrong. Note that TeamTrack doesn't report permission violations to the Event Log.
If you need to change the list of replicated fields (see section 5.3) after you've been using the integration for a while, then you need to take care. Perforce uses the field number in the jobspec to find data, not the field name. See the Perforce System Administrator's Guide [Perforce 2000-10-11, Chapter 5]. If you change the list of replicated fields, then the field numbers will change, which means that the job data will be in a mess. We recommend that you reinitialize replication whenever you change the list of replicated fields. See ???. [At the moment there's no way to reinitialize the replicator: but we plan to provide such a way; see job000050. So there will be a section to refer to soon, I hope. GDR 2000-11-30]
[Section not written yet. RB 2000-11-30]
[This section was originally intended to give instructions to someone called the "resolver", who would deal with cases where the replicator needed human intervention and management decisions to keep going. However, we've decided to implement a policy of "defect tracker wins", so that in these situations the defect tracker issue overrides the Perforce job, with e-mail sent to all parties involved so that they are informed and no information is lost. We expect these situations to be quite rare. This decision reduces installation time and organizational impact, and we believe it will improve the integration. However, we've left the code in, and this section in the manual, in case the beta program shows that we were wrong. RB 2000-11-28]
First tell your staff that you're uninstalling the integration. Ask them to stop using either Perforce jobs or the defect tracker, whichever you're not planning to use in future. Then simply stop the replicator process. Remove any hooks that start it again, such as Windows services, entries in /etc/rc.d, and so on. That's all you need to do.
The rest of this section deals with ways to remove data created by the integration, if you want to do that. We don't recommend it -- we recommend that you keep all of your records.
If you intend to use Perforce jobs in future, you might consider removing the P4DTI fields from the Perforce jobspec, so that they don't clutter up future jobs or confuse users. We suggest you don't re-use the field numbers, though, so that you can easily re-install the integration later.
[We should talk about how to delete the replicator installation itself, most likely by removing the contents of a directory. We should talk about removing just one replicator when there are several running. RB 2000-10-15]
Problem: A replication failed and a conflict was reported. I went into Perforce and set the action to "keep". But the same conflict happened again.
Analysis: When you set the action to "keep", the replicator tries to replicate again. If it failed before because the data does not pass the consistency checks, it will most probably fail again now for the same reason.
Solution: Figure out what went wrong, fix the problem in the defect tracker or in Perforce, and only then set the action to "keep" or "discard".
2000-08-10 | RB | Created placeholder. |
2000-09-11 | GDR | Added instructions for demonstrating the integration and notes on version 0.2. |
2000-09-20 | RB | Replaced demo instructions with full documentation outline from documentation plan. |
2000-10-15 | RB | Added installation and uninstallation sections, and other sections discussed in [RB 2000-10-07]. Removed parts specific to Ravenbrook Information System. |
2000-10-16 | RB | Merged with master sources and GDR's demonstration instructions for version 0.2. More edits required to make this consistent with the master sources. |
2000-10-19 | GDR | Updated to fix defects in release 0.3.1 [GDR 2000-10-17a] and release 0.3.2 [RB 2000-10-18b]. |
2000-11-25 | LMB | Removed "system" from title. Made lots of minor formatting and transition edits. Moved Glossary to end of document. Reorganized Section 4. |
2000-11-26 | RB | Improved prerequisites section. Added draft Bugzilla prerequisites. Formatted troubleshooting section. Updated version 0.3 references to version 0.4. |
2000-11-27 | RB | Added readership. Removed some false statements. |
2000-11-29 | GDR | Revised section 5 (configuration) to explain how to use the automatic configuration engine for TeamTrack. Moved material from sections 4 and 5 to make an appendix E for advanced configuration. Added section 4.6, a placeholder that will describe how to create a Perforce user for the replicator. The integration with TeamTrack now requires Python 2.0. |
2000-11-29 | RB | Corrected overview and improved replicator diagram. Changed prerequisites to point at Perforce 2000.2 beta release. Added proper text to Bugzilla prerequisites section. Cross-referenced to User's Guide. |
2000-11-29 | LMB | Changed "—" to "--" because the former doesn't display properly in Netscape. Made some minor edits in Sections 1-3. |
2000-11-30 | LMB | Corrected figure numbers in the text that were off by one. Finished editing the AG. Swapped round Sections D and E. Searched the doc for "dfn" tags and incorporated those terms into the glossary. Deleted the list in Section 4.1 and folded its single entry into the preceding sentence. Added a short note to Section 4.4 to the effect that if you're using IIS, you don't need to stop and restart the TeamTrack server. Added a note to Section 4.6 that we need to tell admins to make the P4DTI user a Perforce super user and add it to the "p4 protect" table if they're using it. |
2000-11-30 | RB | Added instructions to upgrade users' Perforce clients and to stop using TeamShare SourceBridge. Told the administrator to check the Windows event log when things go wrong, because the TeamShare API doesn't tell the replicator about errors. |
2000-11-30 | GDR | Added comments to the example jobspec in section D.2, and fixed the formatting. Added note saying that you may not have a field called "code". Listed the TeamTrack workflows that won't work well. Wrote advice on how to configure the integration. Added the changelist_url configuration parameter. |
This section describes the way in which the integration represents the Perforce data in the defect tracking system's database (DTDB), so that organizations can write queries on the combined Perforce and defect tracking data [requirement 68].
Details of the schema extensions for TeamTrack aren't yet in the manual. Full documentation is available in the design document "TeamTrack database schema extensions for integration with Perforce (version 0.4)" [GDR 2000-09-04].
[Section not written yet. RB 2000-09-20]
Warning: The configuration methods in this section are not supported by Perforce or TeamShare.
[This section roughly explains how to configure the integration manually, i.e. by setting up your own Perforce jobspec and telling the replicator how the individual fields correspond and which translators to apply, and so on. We don't intend to support this method. We discovered during the alpha programme that it was much too complicated and difficult to understand given our installation time requirement (requirement 63). We developed a much simpler configuration method which automatically derived the jobspec and translations from the defect tracker configuration. This section might be useful if you have existing jobs and don't want your jobspec changed by the integration. If we don't get any demand for this during the beta program we'll probably remove it. RB 2000-11-29.]
You need to update the Perforce jobspec to add the fields required by the integration. The fields you need to add are described in this section. See also Chapter 5, "Customizing Perforce: Job Specifications", in the Perforce System Administrator's Guide [Perforce 2000-10-11, Chapter 5].
Edit the jobspec by running p4 -p 127.0.0.1:1667 jobspec (use the address and port for the Perforce server you're using for the integration). Add the following lines to the fields in the Perforce jobspec. It's not essential that the field numbers be as shown, but we recommend that you keep them the same if possible.
Fields:
190 P4DTI-filespecs text 0 default
191 P4DTI-action select 32 required
192 P4DTI-rid word 32 required
193 P4DTI-issue-id word 0 required
194 P4DTI-user word 32 always
Values:
P4DTI-action: keep/discard/wait/replicate
Status: see below
Presets:
P4DTI-rid: None
P4DTI-issue-id: None
P4DTI-user: $user
P4DTI-action: replicate
Comments:
# P4DTI-rid: P4DTI replicator identifier. Do not edit!
# P4DTI-issue-id: TeamTrack issue database identifier. Do not edit!
# P4DTI-user: Last user to edit this job. You can't edit this!
# P4DTI-action: Replicator action. See section 11 of the P4DTI administrator guide.
The Status entry in the Values field should list the states that can be replicated from the defect tracker -- for example, "open/closed/assigned/deferred/verified".
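For example, with the states listed above, the Values field would read:

Values:
P4DTI-action: keep/discard/wait/replicate
Status: open/closed/assigned/deferred/verified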
You can't have a field called "code" in the Perforce jobspec if you're using the integration. This is because Perforce uses the "code" field to pass information about the success or failure of the p4 job -o jobname command.
The configuration module config_DEFECT_TRACKER.py builds two objects, dt and r, that are then used by the Python programs that run the replicator and check the consistency of the database. The dt object is the interface to the defect tracker, and the r object is the replicator itself. Here's an example:
dt = dt_teamtrack.dt_teamtrack(rid, sid, teamtrack_config)
r = replicator.replicator(rid, dt, replicator_config)
rid is the replicator identifier, sid is the Perforce server identifier, and teamtrack_config and replicator_config are Python dictionaries mapping name to value for each configuration parameter.
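As a rough sketch, the dictionaries collect the parameters described in Section 5. The exact set of keys each constructor expects, and which parameter belongs in which dictionary, is defined by the P4DTI sources, so take the assignment below as illustrative only.

# Illustrative only: configuration dictionaries mapping parameter names
# (see Section 5) to values.
teamtrack_config = {
    "teamtrack_server": "teamtrack.company.com:80",
    "teamtrack_user": "P4DTI-replicator0",
    "teamtrack_password": "",
    }
replicator_config = {
    "administrator_address": "p4dti-admin@company.com",
    "replicator_address": "p4dti-replicator0@company.com",
    "smtp_server": "smtp.company.com",
    "log_file": "C:\\Program Files\\P4DTI\\p4dti.log",
    }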
DT | Defect Tracker, Defect Tracking System | Examples are TeamShare's TeamTrack, Soffront TRACK Defects, Bugzilla |
DTDB | Defect Tracking DataBase, Defect Tracking DataBase system | Most defect trackers store the defect tracking information in a database of some sort. Some have abstract interfaces that allow them to use any ODBC compliant database (for example, TeamTrack can use Oracle as its DTDB). Some are closely coupled to a particular database (Bugzilla uses MySQL as its DTDB). |
issue | | A record in the defect tracker's database describing a defect, bug report, or other piece of work. Each issue is replicated to a corresponding Perforce job. |
job | | Perforce's own record of a piece of work to be done -- the Perforce counterpart of a defect tracker issue. Each replicated issue has a corresponding job. |
P4 | The Perforce SCM Software | Perforce Software's fast configuration management system software. We do not use the term P4 to refer to Perforce Software, the company. |
P4DTI | Perforce Defect Tracking Integration | The software that integrates Perforce Software's fast configuration management system (P4) to defect tracking systems (DTs). |
replication | | The process of copying data between the defect tracker's database and the Perforce server so that each stays up to date with changes made in the other. |
replicator | | The P4DTI process that performs replication, polling the defect tracker and the Perforce server and propagating changes from each to the other. |
RID | Replicator Identifier | A token identifying a particular replicator, used to distinguish between replicators when more than one is running. See the rid configuration parameter in Section 5. |
workflow | | A defect tracker's rules controlling who can make which changes to which issues, and at what point in the process. The replicator enforces the defect tracker's workflow. |