The Robust Beauty of the CCAT Methodology
by Peter J. Favaro, Ph.D.
The Problem

Often what we see in the methodologies of custody evaluations is that evaluators talk with interviewees, develop conclusions about what they say, and then combine these (often rather half-heartedly) with other techniques. Talking to people in this way is often called "unstructured interviewing."

A smart litigator will ask experts a series of questions about how accurate this methodology is, and the fact is, it is very inaccurate, especially outside of a clinical/treatment venue. The technical name for interview data is "self report," and it is amongst the most unreliable information about "the person" we can collect. It is the wrong tool for the job custody evaluators do, but lawyers generally don't know enough to take them to task on it.

Why Self Report is Often Bad Data

Self report data can be influenced by the skill the interviewee has in expressing himself or herself. Some people might be great parents, but not so good at talking about themselves. Or, they might be very overwhelmed by the process of having a complete stranger help determine a major element in their future as a parent. Interview data can be influenced by fear of embarrassment, a desire to hide what might not be so desirable about them, and many other factors. In clinical settings, the client and the doctor have lots of time to develop a trusting relationship with one another where, if proper rapport is established, the type and accuracy of self disclosure become richer and richer with good information. But in custody evaluations this luxury does not exist, and even with dozens of hours of interviewing, a client might not feel comfortable disclosing information that might seem to the client (and the doctor) to be adverse to his or her litigation goals.

Even if a client would give themselves over to an accurate process of self disclosure, what would be the chances that you could get TWO people to do the same, so that you can compare what they have to say about one another?

Then, there is the issue of credibility. Disclosure doesn't mean credibility, and since mental health experts in general are not trained in credibility assessment, and since the research data show that mental health professionals are notoriously poor at telling the difference between the truth and a lie, pages of unstructured interview data do not represent ideal information from which to derive expert opinion.

Nor does unstructured interview data necessarily cover content areas that might be helpful to a judge in determining custody. For instance, does it really matter if someone had a "bad family life" (a conclusion often derived from interviews with an evaluator) when science cannot explain how two very bad parents can produce a wonderful child, and two wonderful parents can produce a horrible child? A happy childhood home life does not necessarily create a wonderful custodial parent with great mental health. An unhappy childhood home life does not produce a terrible custodial parent, or an emotionally disturbed individual. The fact is, there are too many variables, none of which we understand well enough to use to predict custody. Aside from there being too many variables, that which might be important in the exploratory processes of clinical interviewing might be totally irrelevant to the issue of custodial fitness.

Another problem is that after unstructured interview data is gathered, in most cases it is selectively reported. It could never be completely reported, because that would require a transcript of everything everyone said during the evaluation interviews. Instead, the evaluator reports what he or she thinks is important. That would be fine if the evaluator's scope were limited to himself or herself -- but it's not. Perhaps the judge or the attorneys, or the litigant himself or herself, thought something was important that wasn't mentioned in the selective material produced in the report? And who's to say it might not be?

The Importance of "Data Transparency"

One of the things that makes it impossible to really examine an expert is the lack of transparency between the data and the resulting opinion and testimony. But, if experts acted like scientists (as opposed to clinicians) and relied on their communication skills to help the judge and others understand the benefits of scientific approaches to data gathering, their opinions might be more useful to the Court.

The Custody Conflict Analysis Tool (CCAT) takes one such transparent and scientific approach. The CCAT takes common themes and complaints that parents have during custody struggles and poses structured, specific questions about them to each litigant. Responses can be directly compared, grouped, and even made the target of specific statistical analyses. The answers are then checked against a set of basic assumptions using an artificial intelligence paradigm known as an "expert system": a set of rules that is compared against the answers. The questions and answers are produced in a report. They are published for anyone to see, and they can be interpreted with whatever significance the trier of fact or the examining attorneys see fit. The expert can be examined about how or why particular answers became important in the expert analysis.
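To make the "expert system" idea concrete, the rule-matching logic it describes can be sketched in a few lines of Python. Everything below is illustrative only: the question identifiers, rating scale, rules, and notes are invented for this sketch and are not the actual CCAT items or rules.

```python
# Illustrative sketch of an expert-system pass over structured answers.
# Question IDs, scale, rules, and notes are hypothetical, NOT actual CCAT content.

# Each litigant answers the same fixed questions (here, on a 1-5 scale),
# which is what makes direct side-by-side comparison possible.
answers = {
    "Parent A": {"q1_communication": 2, "q2_flexibility": 5},
    "Parent B": {"q1_communication": 4, "q2_flexibility": 1},
}

# Each rule pairs a condition on one answer with a descriptive (not
# predictive) note -- the system describes response patterns, it does
# not score anyone as a "good" or "bad" parent.
rules = [
    ("q1_communication", lambda v: v <= 2,
     "Reports difficulty communicating with the other parent"),
    ("q2_flexibility", lambda v: v <= 2,
     "Reports reluctance to adjust the parenting schedule"),
]

def evaluate(answers, rules):
    """Return, per litigant, the descriptive notes whose rules fire."""
    report = {}
    for who, responses in answers.items():
        report[who] = [note for qid, condition, note in rules
                       if condition(responses[qid])]
    return report

# Both the raw answers and the fired rules would appear in the report,
# so the path from data to conclusion stays transparent and examinable.
for who, notes in evaluate(answers, rules).items():
    print(who, "->", notes or ["No rules triggered"])
```

Because every litigant answers identical questions, the output can be laid side by side, and an attorney can trace exactly which answer triggered which rule, which is the transparency the paragraph above argues for.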

Performing work like this is what I went to school for. Well trained psychologists should act like scientists, not talk show hosts who interview for the purpose of uncovering pieces of personal life that are "interesting."

Secondarily, CCAT questions and answers are systematically archived, so that future research can determine whether certain patterns of responding can predict anything, something concerned scientists have wanted for a long time. For now, to address anyone who might criticize the CCAT for not offering predictions: no available data can do this, so the CCAT is neither better nor worse than anything else in terms of prediction. The CCAT is a "descriptive tool," and descriptions of attitudes and beliefs, in my opinion, are very helpful assessment tools.

When people see the CCAT, often their first question is, "Well, can't people give answers in certain ways just to make themselves look good?" My answer is that whether they answer to make themselves "look good" is not the issue. The issue is that there is a history of facts to the case that becomes the subject of testimony and evidence presentation at trial. So, the important question is, compared to their own self report, does the testimony and evidence indicate: ARE they THAT GOOD?

Any person who answers the CCAT with responses that are virtuous should be able to show through the facts that their past behavior reflects this. The CCAT then becomes a very useful tool for cross examination, fact finding, and very specific drilling down regarding behavior that is relevant to the fitness of the person providing answers to the CCAT questions.

I think the CCAT will assist the ultimate trier of fact, aid the court process, and contribute to better expert opinion, and it will do so more quickly, more efficiently, and more economically. However, time will tell...