Version Upgrades and Automated Testing

Do any of you have tips or ideas for using Orkestra to help with autonomously evaluating script performance, both for development testing and for upgrade/maintenance?

Currently, I run scripts on different inputs manually, and it’s a labor-intensive process. I’ve been reading about unit testing in conventional languages and am trying to figure out how I can do this better.
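For reference, the basic pattern I’ve been reading about looks like this in plain Python (a minimal unittest sketch; wall_area is just a stand-in for something worth testing):

import unittest

def wall_area(length, height):
    # Stand-in for the logic under test.
    return length * height

class WallAreaTests(unittest.TestCase):
    def test_known_case(self):
        # Arrange known inputs, act, then assert the expected outcome.
        self.assertEqual(wall_area(10, 3), 30)

    def test_edge_case_zero_height(self):
        self.assertEqual(wall_area(10, 0), 0)

if __name__ == '__main__':
    unittest.main()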

One thought I have is to keep a dedicated test Revit file for each script, and maybe use a playlist associated with each known case, but that doesn’t give me a great trace for evaluating what works and what doesn’t. I think it would be useful to at least capture which cases succeed and which fail.

The other thought is to take sort of an ‘older’ approach and try to automate testing with journal files. The trick is needing to switch through different inputs to validate known edge cases.
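The launching side seems manageable; a rough sketch of what I mean (the paths are assumptions for my setup) would be to replay a recorded journal like this:

import subprocess

# Paths are assumptions; adjust per machine and Revit version.
REVIT_EXE = r"C:\Program Files\Autodesk\Revit 2024\Revit.exe"
JOURNAL = r"C:\journals\run_case_01.txt"  # a journal recorded beforehand

# Launching Revit with a journal file as the argument replays that session.
subprocess.run([REVIT_EXE, JOURNAL], check=True)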

Another issue, a bit more on the ‘upgrading’ side of the task, is determining node usage, especially for targeting scripts still using IronPython2 or hitting known issues with Autodesk nodes (the only one I’m currently aware of is the If node change from 2.10 to 2.13, but I’m sure there are others).

I know Orkestra can help with package distribution between different Revit versions, but does the hub version affect other aspects of how the script runs? Not sure that’s a clear way to ask the question.


This is an awesome topic @mclough, and one that’s been on our mind for some time.
Orkestra has the perfect infrastructure to make something like this happen.
The experience we’d like to produce is exactly what you’ve described:

  • feed a simple Revit project
  • select an Orkestra workspace (with its definitions and package settings)
  • specify which version of Revit to test on
  • get a detailed report of how things worked

The thing with unit testing is that you need to specify the expected outcome for each method you’re testing. For this, we could let you flag the relevant nodes and enter the expected outcome somewhere.
Another thing that is important but hard to address is inputs. We’d need to figure out a way to feed default values to Dynamo input nodes or Data Shapes nodes. Though we have ideas for this, it will probably be very hard to address all cases. Unit testing is hard for UIs even in text-based programming like C# and WPF. But not impossible!
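To make the input side concrete: Dynamo 2.x graphs mirror nodes marked “Is Input” under a top-level “Inputs” key in the .dyn JSON, so a harness could at least enumerate them and their current values before deciding what to override. A minimal read-only sketch, assuming that key is present (the path is a placeholder):

import json

def list_graph_inputs(dyn_path):
    # Enumerate the input nodes recorded in a Dynamo 2.x .dyn file.
    with open(dyn_path, 'r', encoding='utf-8') as f:
        graph = json.load(f)
    for node in graph.get('Inputs', []):
        print(node.get('Name'), node.get('Type'), node.get('Value'))

list_graph_inputs(r'C:\dyn\MyScript.dyn')  # placeholder path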

Then the outcome would be a report of whether the expected outputs were achieved and, if you want to dig deeper, our game-changing run inspector to see exactly how it went!

For now, what I personally do is create a testing workspace with the right package environment for a given Revit version, then copy my definitions from the already-working workspaces and go through them one by one. When something breaks, I “open in Dynamo”, make the needed modifications, and update the definition in Orkestra. When everything is tested, I deploy. It is manual and tedious, but at least it gives a well-organised testing space with a specific package environment, and it allows updating files while keeping the possibility to revert thanks to version control.

I’m confident we can make something happen in this space, and that it would be super valuable to all Dynamo lovers. We’ll try to surface some first elements by the end of this year.

Would love to hear you (and anyone interested in this) describe the best experience you can think of for performing automated testing!

PS: I saw John Pierson work his magic on the If node, among other awesome script-migration helpers :slight_smile: you should look it up!

There are definitely some things I need to look into:

  • DynamoGraphMigrationAssistant
  • RevitTestFramework

What I’m currently fixated on is the way 2.13 and 2.17 display warnings for deprecated nodes.

I think in 2.13 you don’t get the Node Issue Help in the documentation browser, but to me this means that somewhere there’s a relation between node names/IDs and a warning message. I just need to figure out where these warning attributes are defined and build a list of nodes with their warnings and the related version. Then it seems like you should be able to search the text of the .dyn and generate a list of scripts containing flagged nodes without having to open each one.
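Since a .dyn is just JSON, a quick way to see what there is to match against is to dump each node’s metadata (a throwaway sketch; the path is a placeholder):

import json

with open(r'C:\dyn\MyScript.dyn', 'r', encoding='utf-8') as f:  # placeholder path
    graph = json.load(f)

# Each node entry carries a NodeType and, for standard nodes, a FunctionSignature.
for node in graph.get('Nodes', []):
    print(node.get('NodeType'), node.get('FunctionSignature', ''))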

Again, this may be accomplished by the aforementioned tools, but I haven’t validated them yet. I’m really looking for something to help direct effort and avoid opening scripts that require no changes.

I wrote a very basic implementation of this idea. I haven’t yet found where the deprecated-node status is flagged in the Dynamo source code, so the list of FunctionSignatures has to be populated manually.

Extensions to the idea would be to add some different modes:

  • Analyze Packages
  • Enable/Disable node counts
  • Scan PythonScriptNode code for deprecated/version-specific implementations (a rough sketch of this one follows the script below)

import os
import json
import csv

def read_deprecated_methods(deprecated_methods_file):
    """Read the watch list: one FunctionSignature per line."""
    with open(deprecated_methods_file, 'r') as f:
        return f.read().splitlines()

def read_json_file(json_file):
    """Load a .dyn file, which is plain JSON in Dynamo 2.x."""
    with open(json_file, 'r', encoding='utf-8') as f:
        return json.load(f)

def search_deprecated_methods(data, deprecated_methods):
    """Collect hits for watched signatures and IronPython2 Python nodes."""
    deprecated_found = []
    for node in data.get('Nodes', []):
        node_type = node.get('NodeType', '')
        if node_type == 'FunctionNode':
            function_signature = node.get('FunctionSignature', '')
            if function_signature in deprecated_methods:
                deprecated_found.append(function_signature)
        elif node_type == 'PythonScriptNode':
            # Older graphs may omit the Engine key; those nodes ran
            # IronPython2 by default, so treat a missing key as IronPython2.
            if node.get('Engine', 'IronPython2') == 'IronPython2':
                deprecated_found.append('PythonScriptNode (IronPython2)')
    return deprecated_found

def find_files_with_extension(directory, extension):
    """Find all files in the directory with the given extension."""
    found_files = []
    for root, _, files in os.walk(directory):
        for file in files:
            if file.endswith(extension):
                found_files.append(os.path.join(root, file))
    return found_files

def search_file_methods(json_file, deprecated_methods):
    """Return (graph name, unique deprecated hits) for one .dyn file."""
    data = read_json_file(json_file)
    name = data.get('Name', '')
    search_results = search_deprecated_methods(data, deprecated_methods)
    return (name, list(set(search_results)))

def write_data_to_csv(data, filename):
    with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(["Graph Name", "Deprecated Nodes"])  # header; hit columns follow the name
        for item in data:
            writer.writerow(item)

def write_data_to_txt(data, filename):
    with open(filename, 'w', encoding='utf-8') as txtfile:
        for item in data:
            txtfile.write(str(item) + '\n')

def convert_tuples_to_lists(data):
    """Flatten (name, [hits]) tuples into flat rows for the CSV writer."""
    return [[item[0]] + item[1] for item in data]

def main(path, deprecated_methods_file):
    deprecated_methods = read_deprecated_methods(deprecated_methods_file)
    file_list = find_files_with_extension(path, '.dyn')
    print("{0} files found".format(len(file_list)))
    output = [search_file_methods(file, deprecated_methods) for file in file_list]
    # Keep only files that actually contain flagged nodes.
    filtered_output = [result for result in output if result[-1]]
    print("Update nodes in {0} files".format(len(filtered_output)))

    # Flatten before writing so each hit lands in its own CSV column.
    write_data_to_csv(convert_tuples_to_lists(filtered_output), 'DynamoNodeAnalysis.csv')
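For the third mode above, Python nodes store their source under a “Code” key in the node entry, so a heuristic scan for Python-2-only idioms could flag code that needs migration. A rough sketch (the marker list is my own guess, not exhaustive):

import re

# Python-2-only idioms that suggest IronPython2-era code; heuristic, not exhaustive.
PY2_MARKERS = [
    re.compile(r'\biteritems\s*\('),
    re.compile(r'\bxrange\s*\('),
    re.compile(r'^\s*print\s+[^(\s]', re.MULTILINE),  # print statement, not print()
]

def scan_python_nodes(data):
    """Return Ids of Python nodes in a loaded .dyn that trip a Py2 marker."""
    flagged = []
    for node in data.get('Nodes', []):
        if node.get('NodeType', '') != 'PythonScriptNode':
            continue
        if any(marker.search(node.get('Code', '')) for marker in PY2_MARKERS):
            flagged.append(node.get('Id', ''))
    return flagged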

Thanks for sharing this here @mclough! I’m sure this can be helpful to a lot of people trying to parse Dynamo JSON files for this or other reasons!
This is also great for us. It gives us ideas for general content-scan features we could implement in Orkestra.

Something related that might be nice is a modification to Playlists: the ability to define selection inputs before running, to specify an output location, and to record the run status (completed / completed with warnings).

This would help when you have a script you want to ‘test’ against some combination of inputs you need to validate. If you can specify the inputs for each case and capture some kind of output, you have a nice workflow for directing attention during upgrades and development.

I think this would be roughly equivalent to configuring a .bat file and running journals.
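As a sketch of what a test case definition could look like (everything here is hypothetical, just to show the shape of the data):

# Hypothetical test-case spec: input overrides per case plus an expected status.
TEST_CASES = [
    {
        'definition': 'RenumberRooms.dyn',
        'inputs': {'Level Name': 'Level 1', 'Prefix': 'R-'},
        'expected_status': 'completed',
    },
    {
        'definition': 'RenumberRooms.dyn',
        'inputs': {'Level Name': '', 'Prefix': 'R-'},  # edge case: empty level
        'expected_status': 'completed with warnings',
    },
]

def report(case, actual_status):
    # Compare an observed run status against the expectation for one case.
    outcome = 'PASS' if actual_status == case['expected_status'] else 'FAIL'
    print(outcome, case['definition'], case['inputs'])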
